Troubleshooting Answers
Diagnose low coverage, weak citations, mixed projects, and source-quality problems before you blame the model.
When MARCUS gives a weak answer, the underlying cause is often fixable. The important move is to diagnose the right layer of the problem. Sometimes the issue is the question. Sometimes it is the project boundary. Sometimes it is the source quality. Only occasionally is the model itself the main problem.
This guide gives you a practical recovery sequence.
Start With The Symptom
| Symptom | Likely causes | First check |
|---|---|---|
| No citations or weak citations | Thin corpus, low match quality, or missing source | Did the relevant source exist in the project and finish indexing? |
| Low coverage | Not enough relevant material or question too broad | Is the corpus big enough and scoped correctly for this question? |
| Generic answer | Broad query, mixed project, or weak source set | Can the question be made more concrete? |
| Conflicting answer | Corpus contains disagreement or multiple policy layers | Which source should probably control in this context? |
| New upload never shows up | Indexing incomplete, wrong project, or poor source match | Is the new source ready, and does it clearly address the question? |
| Briefing looks wrong | Parsing noise, poor scan quality, or unusual document structure | Does the original file look clean and readable? |
A Reliable Recovery Sequence
When an answer looks weak, work through these steps in order:
- Check whether the relevant sources are indexed.
- Check whether the project scope is clean.
- Check whether the question is specific enough.
- Check the authority and quality of the cited sources.
- Open the Library or briefings to inspect the corpus directly.
- Only after that should you conclude there is a deeper system failure.
This order matters. It prevents you from trying to prompt around a corpus problem.
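The ordered checks above can be sketched as a simple triage function. This is a minimal illustration, not a MARCUS feature: the layer names and the boolean `state` dict are assumptions standing in for manual inspection steps.

```python
# Ordered checklist mirroring the recovery sequence above.
# Each entry stands in for a manual inspection step, not an API call.
RECOVERY_SEQUENCE = [
    "relevant sources are indexed",
    "project scope is clean",
    "question is specific enough",
    "cited sources are authoritative",
    "corpus inspected via Library or briefings",
]

def diagnose(state):
    """Return the first failing layer, or None if every layer checks out.

    Only a None result justifies suspecting a deeper system failure.
    """
    for layer in RECOVERY_SEQUENCE:
        if not state.get(layer, False):
            return layer
    return None
```

The point of the sketch is the early return: you stop at the first failing layer instead of prompting around it.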
Problem: No Direct Citations
If the answer does not meaningfully cite the evidence, common reasons include:
- the relevant source is missing
- the source exists but is not indexed yet
- the question is so broad that retrieval spreads across many weak matches
- the available sources are poor matches for the request
What to do
- Confirm the source is present and ready.
- Rephrase the question around a concrete threshold, criterion, or step.
- Ask for the specific source text if needed.
- If no source should answer the question, add the missing document rather than pushing harder on the same query.
Problem: Low Coverage
Low coverage usually means MARCUS found limited evidence. This does not automatically mean the answer is wrong, but it does mean the corpus may not support a confident synthesis.
Common reasons
- only one document addresses the question
- the uploaded sources are too sparse
- the project is mixed, so the relevant source competes with unrelated material
- the question is asking for nuance the corpus does not contain
What to do
- Narrow the question.
- Add missing documents.
- Split the project if the corpus is mixed.
- Treat the answer as preliminary until the source base is stronger.
Problem: The Answer Feels Too Generic
Generic answers often come from generic questions. But they can also come from a project that contains many loosely related documents.
Better approach
Instead of:
Tell me about wound care
Try:
What dressing change timing is recommended after uncomplicated closure?
Which source gives the threshold for escalating postoperative fever after POD2?
How do the uploaded protocols differ on discharge criteria?
If the answer stays generic even after a precise question, inspect the project boundary next.
Problem: Sources Conflict
Conflicts are not always an error. They may reflect:
- local policy versus external evidence
- old versus new versions
- different patient populations
- different stages of care
What to do
- Ask MARCUS to compare the sources directly.
- Open both cited passages.
- Decide which source should carry more weight in your context.
- If one source is outdated or misplaced, clean the project.
Do not circulate a synthesized answer that hides the disagreement.
Problem: The New Source Never Matters
This is common after upload and usually means one of three things:
- the source is not fully indexed yet
- the question does not actually match the new source
- the project contains stronger or more numerous competing sources
Quick test
Ask a question that only the new document should answer. If MARCUS still ignores it after indexing, inspect the source briefing and file quality.
Problem: The Briefing Looks Wrong
When the briefing seems obviously off, think first about input quality.
Common causes:
- low-quality scan
- malformed export
- document with unusual structure
- wrong file uploaded
What to do
- Open the original file.
- Confirm the title and content are what you intended.
- Re-upload a cleaner version if needed.
- Re-test with a narrow question after indexing.
Problem: The Project Has Become Chaotic
You may notice:
- surprising citations
- repeated confusion across topics
- too many document types mixed together
- users asking the same question in increasingly specific ways just to force the right source
At that point, the problem is usually not a single answer. The project itself needs restructuring.
Recovery plan
- Stop adding new files temporarily.
- Identify the natural topic groups.
- Create cleaner replacement projects.
- Re-upload the most important sources first.
- Re-test the core questions.
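The "identify the natural topic groups" step can be sketched as a small grouping pass over a manual review. The filenames, topic labels, and `plan_replacement_projects` helper are hypothetical; nothing here reads MARCUS data.

```python
from collections import defaultdict

def plan_replacement_projects(reviewed_sources):
    """Group manually reviewed sources into proposed replacement projects.

    reviewed_sources: iterable of (filename, topic) pairs from your review.
    Returns a mapping of proposed project name -> files to re-upload first.
    """
    groups = defaultdict(list)
    for filename, topic in reviewed_sources:
        groups[topic].append(filename)
    return dict(groups)
```

Running the review first, then grouping, keeps the split driven by topics rather than by upload order.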
When To Escalate
Escalate beyond normal user troubleshooting when:
- a clean, indexed, high-authority source is consistently ignored for a clearly matching question
- multiple users reproduce the same issue across projects
- ingestion repeatedly fails on valid documents
- the UI is showing stale or contradictory status information
- the answer cites passages that obviously do not support the claim
Those situations may indicate a product defect rather than normal corpus quality issues.
One Safe Default
If you cannot tell whether the problem is the question, the source, or the project, inspect the corpus before asking the model to try again. In MARCUS, evidence quality and corpus structure usually determine the ceiling of answer quality.