Create, read, and modify files
Use Claude Code to create new files, read existing files, and modify file contents — then verify every result.
Lesson outcome
You will be able to use Claude Code to create, read, and modify files on your computer, and you will know how to verify that the results are correct.
Why this matters in an agency
Creating and modifying files is the most fundamental thing Claude Code does. Every deliverable you produce with this stack — reports, SOPs, proposals, scripts, configuration files — starts as a file operation. If you can create a file, read it, modify it, and verify the result, you can do almost anything with Claude Code. The rest is just context and specificity.
Inputs, tools, and prerequisites
Claude Code installed and authenticated. A working directory (your vault folder). The previous two lessons completed so you are comfortable with the interface.
Step-by-step walkthrough
Create a new file
Start Claude Code in your vault directory. Then ask it to create a file:
```
Create a new note called "My Agency Overview" with a brief template that includes sections for: company name, services offered, target market, and current team size. Leave the sections empty for me to fill in later.
```
Claude Code will propose using the Write tool to create a new file. It will show you the filename and the proposed content. Read the content before you approve. Does it match what you asked for? Are the sections there? Is the formatting reasonable?
Allow the tool use. Claude Code creates the file.
Verify the file was created
Verify independently. Open Obsidian and look in your vault. The "My Agency Overview" note should appear in the sidebar. Open it and confirm the content matches what Claude Code showed you.
You can also verify through Claude Code itself:
```
Read the My Agency Overview file and show me exactly what is in it.
```
Claude Code reads the file back to you. The content should match what was written.
This two-step verification — check in Obsidian and check through Claude Code — is a habit worth building. Trust but verify. Claude Code is accurate the vast majority of the time, but your verification catches the occasional mistake and, more importantly, builds your confidence in the tool.
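If you are comfortable in a terminal, a third verification path is the shell itself. The sketch below simulates the note in a throwaway directory so it runs standalone — the path `/tmp/vault-demo` and the filename `My Agency Overview.md` are assumptions for illustration; in practice you would run the `ls` and `grep` lines against the real file in your vault.

```shell
# Simulate the note Claude Code created, then verify it from the shell.
# Path and filename are illustrative -- substitute your vault's real ones.
mkdir -p /tmp/vault-demo && cd /tmp/vault-demo
printf '# My Agency Overview\n\n## Company Name\n\n## Services Offered\n' \
  > "My Agency Overview.md"

# Verification: the file exists and has the section headings you asked for.
ls -l "My Agency Overview.md"
grep -c '^## ' "My Agency Overview.md"   # counts section headings (2 here)
```

The point is not the specific commands but the habit: verification that does not depend on Claude Code's own report of what it did.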
Read an existing file
Ask Claude Code to read a file that already exists:
```
Read the test note I created earlier and tell me what it says.
```
Claude Code reads the file and summarizes or quotes the content. Compare what Claude Code reports to what you know the file contains. This is a basic accuracy check.
Modify a file
Ask Claude Code to change something:
```
Add a new section at the bottom of the My Agency Overview note called "Next Steps" with a single bullet point that says "Complete the F1 course and set up my VPS."
```
Claude Code proposes an edit. It shows you the specific text it wants to add and where in the file it will go. Read the proposed change carefully. Does it add the right text? Does it go in the right place? Does it leave the rest of the file untouched?
Allow the edit. Then verify in Obsidian — open the file and confirm the new section is there with the correct text.
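A quick `grep` gives you the same confirmation from the terminal. This sketch simulates both the original note and the appended section so it runs on its own — the path and filename are illustrative assumptions, and in real use only the two `grep` lines matter, run against your actual note after approving the edit.

```shell
# Simulate the note plus the appended "Next Steps" section, then verify.
# The path is illustrative -- point grep at your real vault file instead.
note="/tmp/vault-demo2/My Agency Overview.md"
mkdir -p "$(dirname "$note")"
printf '# My Agency Overview\n\n## Services Offered\n' > "$note"

# Stand-in for the edit Claude Code applied.
printf '\n## Next Steps\n- Complete the F1 course and set up my VPS.\n' >> "$note"

# Verification: both the new heading and the bullet text are present.
grep -n '## Next Steps' "$note"
grep -n 'Complete the F1 course' "$note"
```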
Try a vague prompt and see what happens
Intentionally give Claude Code a vague instruction:
```
Fix my agency overview.
```
Notice how Claude Code responds. Without specific instructions, it will either ask you clarifying questions or make assumptions about what "fix" means. This is an important lesson: Claude Code is not a mind reader. The quality of its output depends directly on the quality of your input.
Compare these two prompts:
- Vague: "Make the agency overview better."
- Specific: "In the My Agency Overview note, add a bullet point under Services Offered that says 'Local SEO for home service contractors.'"
The second prompt produces exactly what you want. The first prompt produces guesswork. This distinction — vague versus specific — is the most important skill you will develop throughout this course. Every lesson after this one builds on it.
Understand the cost of vagueness
When you give Claude Code a vague prompt, one of two things happens. Either Claude asks for clarification (good — it is surfacing your missing input) or it guesses and produces something you did not want (bad — you now have to review, reject, and try again). Both waste time. The fastest path to a useful result is a specific prompt that includes what you want, where you want it, and how you will know it is correct.
This does not mean your prompts need to be long. Short and specific beats long and vague. "Add 'Local SEO for home service contractors' as a bullet under Services Offered in My Agency Overview" is short, specific, and reliably produces the right result.
Failure modes and verification checks
The main failure mode is approving tool use without reading what Claude Code plans to do. Slow down and read the proposed action before you approve it. Another failure mode is not verifying the result — trusting that it worked without checking. A third is getting frustrated by vague-prompt failures instead of recognizing them as a prompt quality problem.
Verification: open the files in Obsidian and confirm they contain exactly what you expected. If anything is wrong, tell Claude Code what is wrong and ask it to fix it. That is the normal workflow — create, verify, correct.
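One lightweight way to make the verify step concrete is to snapshot a file before asking Claude Code to edit it, then diff afterward to see exactly what changed. This is a sketch under assumed paths, with a `printf` standing in for the edit Claude Code would make:

```shell
# Snapshot-and-diff verification sketch. Paths are illustrative.
note="/tmp/vault-demo3/note.md"
mkdir -p "$(dirname "$note")"
printf 'original line\n' > "$note"

cp "$note" "$note.before"         # snapshot before asking Claude Code to edit
printf 'added line\n' >> "$note"  # stand-in for the edit Claude Code makes

diff "$note.before" "$note" || true   # exit 1 just means "files differ"
```

The `diff` output shows only the lines that changed, which is exactly the question you are trying to answer: did the edit touch what it was supposed to, and nothing else?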
Implementation checklist
- Create a new file using Claude Code.
- Verify the file exists and contains the right content in Obsidian.
- Read an existing file through Claude Code and confirm accuracy.
- Modify a file and verify the change in Obsidian.
- Give a vague prompt and observe the difference versus a specific prompt.
- Understand that prompt specificity determines output quality.
Immediate next action
Fill in the "My Agency Overview" note with your actual business information. You can do it manually in Obsidian or through Claude Code by describing your business and asking it to populate each section. Either way, you now have one real note in your vault with actual business content. That is the beginning of the memory layer.
Exercise
Try this prompt-comparison exercise. Ask Claude Code to do the same task two different ways:
First prompt:
```
Write me an email to a potential client.
```
Second prompt:
```
Write a follow-up email to a potential client who runs a garage door repair company in Austin, TX. We met at a networking event last Tuesday. He was interested in our local SEO service. The tone should be professional but friendly. Include a specific next step of scheduling a 15-minute call this week.
```
Compare the two outputs. The first will be generic and nearly useless. The second will be specific, relevant, and close to something you could actually send. The difference is not Claude's intelligence — it is the quality of the input you gave it.
Verify both outputs by asking yourself: could I actually send this email to a real person? The answer for the second prompt should be much closer to yes. This prompt-comparison pattern is something you will use throughout the course whenever you want to understand the boundary between useful and useless output.