
Quick Diagnosis

API Errors

Common Errors (Add)

| Status Code | Error Type | Common Cause | Recommended Fix |
|---|---|---|---|
| 400 | BadRequest | Invalid JSON or missing required fields (e.g., `source`). | Validate your payload against the schema. Ensure the `documents` array is not empty. |
| 401 | Unauthorized | Invalid or missing `ALCHEMYST_AI_API_KEY`. | Check your `.env` file. Ensure you aren't using a test key in production. |
| 403 | Forbidden | Accessing a scope (e.g., `user_123`) you don't have permission for. | Verify the `api_key` or `user_id` matches the authenticated session. Make sure the key is still valid. |
| 409 | Conflict | A document with the same `fileName` already exists. | Follow Pattern 2: Document Updates to update or replace existing documents safely. |
| 413 | PayloadTooLarge | Document exceeds the 50 MB limit or batch size > 100. | Split your `documents` array into chunks of 50 and retry. |
| 422 | Unprocessable | The `fileType` in metadata doesn't match the actual content, so it can't be parsed. | Ensure `fileType` matches the actual binary content (e.g., don't send PDF bytes as `text/plain`). |
| 429 | RateLimit | Exceeded 1,000 requests/minute. | Implement exponential backoff (the SDK default). Contact support for higher limits. |
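The 429 fix above recommends exponential backoff, which the SDK implements by default. As a rough illustration of the idea, here is a minimal sketch, assuming a hypothetical `callApi` function and that the thrown error exposes a numeric `status` field (neither is part of the documented SDK surface):

```typescript
// Compute the delay for a given retry attempt: 500ms, 1s, 2s, ... capped at 30s.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry a rate-limited call, waiting longer after each 429 response.
async function withBackoff<T>(
  callApi: () => Promise<T>,
  maxRetries = 5,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await callApi();
    } catch (err: any) {
      // Only retry rate-limit errors, and only up to maxRetries times.
      if (err?.status !== 429 || attempt >= maxRetries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```

The base delay, cap, and retry count here are illustrative defaults, not the SDK's actual values.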
Common Errors (Search)

| Status Code | Error Type | Common Cause | Recommended Fix |
|---|---|---|---|
| 400 | BadRequest | Missing or malformed search parameters. | Ensure `query` is provided and properly formatted. Validate your payload against the schema. |
| 401 | Unauthorized | Invalid or missing `ALCHEMYST_AI_API_KEY`. | Confirm the request includes a valid API key. |
| 403 | Forbidden | Searching a scope you don't have access to. | Make sure the key is still valid and the search scope is accessible to the current user or org. |
| 422 | Unprocessable | Documents were not indexed correctly. | Verify documents were successfully ingested before searching. |
| 429 | RateLimit | Too many search requests. | Add throttling or cache repeated searches. |
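The 429 fix above suggests caching repeated searches. One way to do that is a small memoizing wrapper; in this sketch, `search` stands in for the SDK's search call, and the cache key and TTL are illustrative assumptions:

```typescript
type SearchFn = (query: string) => Promise<unknown>;

// Wrap a search function so identical queries within ttlMs reuse the
// in-flight or cached result instead of hitting the API again.
function cachedSearch(search: SearchFn, ttlMs = 60_000): SearchFn {
  const cache = new Map<string, { at: number; result: Promise<unknown> }>();
  return (query) => {
    const hit = cache.get(query);
    if (hit && Date.now() - hit.at < ttlMs) return hit.result;
    const result = search(query);
    cache.set(query, { at: Date.now(), result });
    return result;
  };
}
```

Caching the promise (rather than the resolved value) also deduplicates concurrent identical requests. For real use you would want an eviction policy and a cache key that includes all search parameters, not just the query string.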

Application-Level Issues

Performance Issues

Slow Ingestion

Diagnosis:
// ❌ Sequential adds: ~30 seconds for 1000 docs
for (let i = 0; i < 1000; i++) {
  await alchemyst.add(documents[i]);
}

// ✅ Bulk adds: ~3 seconds for 1000 docs (10x faster!)
await alchemyst.bulkAdd(documents);
Common Causes:
  • Document size approaching the 50 MB per-file limit
  • Large number of documents uploaded sequentially
  • Token limit reached
Solutions:
  1. Make sure each document is less than 50 MB
  2. Upload documents via bulkAdd (faster)
  3. Check that you have sufficient tokens
  4. For bulk operations, read here
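Combining solution 2 with the chunks-of-50 advice from the error table above, a batch upload can be sketched like this. Here `bulkAdd` is passed in as a placeholder for the SDK method, so only the chunking logic is shown:

```typescript
// Split an array into fixed-size chunks.
function chunkArray<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Upload documents in batches of 50, staying under the 100-document
// batch limit, one bulkAdd call per chunk.
async function bulkAddInBatches<T>(
  bulkAdd: (docs: T[]) => Promise<void>,
  documents: T[],
  batchSize = 50,
): Promise<void> {
  for (const batch of chunkArray(documents, batchSize)) {
    await bulkAdd(batch);
  }
}
```

Batches are sent sequentially here to keep rate-limit pressure low; you could issue a few in parallel if your quota allows.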

Slow Searches (> 1 second)

Quick Checks:
// Measure your query time
const start = Date.now();
const result = await alchemyst.v1.context.search({
  query: "your query",
  groupName: ["a", "b", "c", "d", "e", "f", "g"] // 7 filters!
});
console.log(`Search took ${Date.now() - start}ms`);
Common Causes:
  • 7+ groupName tags (exponential overhead)
  • No minimum_similarity_threshold set
  • 10K+ documents in scope
Solutions:
  1. Reduce to 3-5 groupName tags max
  2. Set minimum_similarity_threshold: 0.5
  3. Use namespaces to shard data

Retrieval Quality Issues

No Results Returned

Quick Checks:
// 1. Verify documents exist
const docs = await alchemyst.v1.context.view();
console.log(`Total docs: ${docs.length}`);

// 2. Try broader search
const result = await alchemyst.v1.context.search({
  query: "your query",
  similarity_threshold: 0.5, // Lower threshold
  groupName: ["broad_tag"] // Fewer filters
});
Common Causes:
  • High minimum_similarity_threshold set
  • groupName tags don't intersect
  • Query is too specific or not relevant to the indexed content
Solutions:
  1. Verify documents exist on the platform
  2. Reduce similarity_threshold from 0.6-0.7 to 0.5-0.6
  3. Remove overly specific groupNames
  4. Try a more general query; make sure it's not too specific
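Solutions 2 and 4 can be automated: retry the search with progressively lower similarity thresholds until something comes back. In this sketch, `search` is a placeholder wrapping the SDK call with a given threshold; the threshold values mirror the ranges suggested above:

```typescript
type ThresholdSearch = (threshold: number) => Promise<unknown[]>;

// Try stricter thresholds first, falling back to looser ones until
// the search returns at least one result.
async function searchWithFallback(
  search: ThresholdSearch,
  thresholds = [0.7, 0.6, 0.5],
): Promise<unknown[]> {
  for (const t of thresholds) {
    const results = await search(t);
    if (results.length > 0) return results;
  }
  return []; // genuinely no matches, even at the loosest threshold
}
```

This keeps precision high when good matches exist but still returns something useful for vaguer queries.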

Common Scenarios

Scenario: Queries Return Too Many Fragments

  • Problem: Search returns 50+ small chunks
  • Cause: Over-segmentation (see Anti-Pattern 1)
  • Solution: Combine related content into cohesive documents

Scenario: Metadata Bloat

  • Problem: Storing too much or redundant information in metadata
  • Cause: Metadata bloat (see Anti-Pattern 2)
  • Solution: Limit metadata to the fields you query or filter by; put everything else in the content itself.
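The fix above can be sketched as a small helper that keeps only the metadata keys you actually filter on and folds the rest into the document content. The `QUERYABLE_KEYS` set below is illustrative; use whatever keys your searches actually filter by:

```typescript
// Keys we actually query or filter on (illustrative).
const QUERYABLE_KEYS = new Set(["source", "fileName", "fileType"]);

function trimMetadata(
  metadata: Record<string, string | number>,
  content: string,
): { metadata: Record<string, string | number>; content: string } {
  const kept: Record<string, string | number> = {};
  const folded: string[] = [];
  for (const [key, value] of Object.entries(metadata)) {
    if (QUERYABLE_KEYS.has(key)) kept[key] = value;
    else folded.push(`${key}: ${value}`); // move non-queryable fields into text
  }
  return {
    metadata: kept,
    content: folded.length ? `${content}\n${folded.join("\n")}` : content,
  };
}
```

Folded fields remain discoverable through semantic search over the content, while the metadata stays within the 20-key limit and fast to filter.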

Debugging Tools

Platform Limits

| Feature | Limit / Specification |
|---|---|
| Max File Size | 50 MB per document |
| Max Batch Size | 100 documents per `add()` request |
| Supported File Types | .pdf, .txt, .docx, .md, .json, .csv |
| Token Limit | 8,192 tokens per document chunk |
| Metadata Fields | Max 20 keys per document; values must be string or number |
| Indexing Speed | ~120 records/sec (text), ~5 sec/page (OCR PDF) |
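Checking a batch against these limits before uploading turns opaque 413/422 errors into clear local messages. A sketch of such a pre-flight check, with the constants mirroring the table above and a simplified `Doc` shape assumed for illustration:

```typescript
const MAX_FILE_BYTES = 50 * 1024 * 1024; // 50 MB per document
const MAX_BATCH_SIZE = 100;              // documents per add() request
const MAX_METADATA_KEYS = 20;            // keys per document

interface Doc {
  sizeBytes: number;
  metadata: Record<string, string | number>;
}

// Return a list of limit violations; an empty list means the batch is safe to send.
function validateBatch(docs: Doc[]): string[] {
  const errors: string[] = [];
  if (docs.length > MAX_BATCH_SIZE) {
    errors.push(`batch has ${docs.length} documents (max ${MAX_BATCH_SIZE})`);
  }
  docs.forEach((doc, i) => {
    if (doc.sizeBytes > MAX_FILE_BYTES) {
      errors.push(`document ${i} exceeds the 50 MB limit`);
    }
    if (Object.keys(doc.metadata).length > MAX_METADATA_KEYS) {
      errors.push(`document ${i} has more than ${MAX_METADATA_KEYS} metadata keys`);
    }
  });
  return errors;
}
```

Run this before `add()` or `bulkAdd` and surface the errors to the caller instead of waiting for the API to reject the request.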

Still Stuck?

  • Check our Discord for community help