Saturday, March 28, 2026

7 Levels of Claude Code

 https://www.youtube.com/watch?v=Y09u_S3w2c8

00:00 - Every Level of Claude Code Explained in 39 Minutes
00:41 - Level 1: The One Thing You Need To Know
03:25 - Level 2: Personalizing CLAUDE for better responses
10:25 - Level 3: Slash Commands, Skills & Hooks - Repeatability
21:21 - Level 4: Connecting Claude Code to Your Apps (MCPs)
26:18 - Level 5: Move from Executor to Supervisor
29:34 - Level 6: Agent Teams
35:57 - Level 7: Fully Autonomous Systems

https://scrapeshq.notion.site/Every-Level-of-Claude-Code-Explained-in-39-Minutes-3007821711818048b119f78d5cc725ff

Level 1 : Prompt

  • first run in plan mode
  • then commit

The AskUserQuestion tool is used to build context by asking questions about assumptions.

The AskUserQuestion tool is an interactive feature within Claude Code (introduced in v0.21) that allows the AI to pause execution and ask the user clarifying questions when it is uncertain about the user's intent or needs further direction. It is designed to gather user preferences, define requirements, or make implementation choices, reducing hallucinations and improving the accuracy of code generation.

Key Features & Functionality:
  • Interactive Interface: Presents between one and four questions at once, supporting single-select, multi-select, and custom input ("Other").
  • Plan Mode Integration: Active during "Plan Mode," where Claude asks questions to confirm requirements before executing changes.
  • Clarification Capabilities: It can display markdown snippets, including code or diagrams, to help the user understand the context of the question.
  • Error Handling: Users can respond with custom text to provide more details if the predefined options are insufficient.

Usage in Workflow:
  • Preventing Assumptions: It forces a shift from AI assumption-making to a confirmation-seeking behavior.
  • Best Practice: Developers often prompt Claude to "ask me if anything is unclear" to trigger this tool, especially before starting large tasks.
  • Limitations: The tool works primarily at the command level (main thread) and cannot be used within subagents.

Common Use Cases:
  • Asking for verification on which files to modify.
  • Probing for environment details (browser, OS, network).
  • Reviewing workflows and seeking approval before writing final code.

Level 2 : CLAUDE.md (follows your personalization & rules)

  • understand rules
  • how you want to work with it (tech stack, preferences, mistakes you do not want to repeat)

Golden Rule : Short. Specific. Only what Claude can't figure out itself.

CLAUDE.md is created by the /init command.
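Following the golden rule above, a minimal CLAUDE.md might look like this (the stack and rules here are placeholders; substitute your own):

```markdown
# CLAUDE.md

## Tech stack
- Python 3.12, FastAPI, Postgres
- pytest for all tests

## Rules
- Run the test suite before declaring a task done
- Never commit directly to main; always work on a branch
- Prefer small, focused functions

## Mistakes not to repeat
- Do not add new dependencies without asking first
```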

Level 3 : Repeatability (Slash Commands, Skills & Hooks)

  • Slash commands
    • are saved prompts which you can reuse
    • it is like pressing a button
  • Skills
    • https://skillsmp.com/ 
    • Background knowledge Claude loads automatically when it's relevant (based on project description and skill description) . You don't trigger these — Claude just knows to use them when it needs to.
    • can still be invoked as slash commands
    • Unlike commands, skills can include a whole folder of supporting files — example posts, style guides, reference docs — giving Claude much richer context to work from
      • brand-voice : your tone, banned words, sentence style, plus a folder of 10 example posts that nail your voice. Claude pulls this in whenever it's writing any content, without you asking
    • You never type /brand-voice. Claude just knows "I'm writing content, I should check the brand voice skill." And because skills are folders not just single files, you can pack in all the context Claude needs — examples, templates, reference material — not just a list of rules.
    • Use skills when you want to sometimes provide instructions to your primary agent with a relevant skillset. Use slash commands when there are things you specifically know you’ll want to invoke at certain points.
  • Hooks - these don't require a brain (LLM tokens)

Hooks are automatic triggers / mechanical checks that fire when Claude does something. It’s for stuff a bash script can do without needing Claude's brain.

inside .claude create a settings.json
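As a sketch, a .claude/settings.json that runs a formatter after every file edit might look like the following (the prettier command is just an example; swap in whatever mechanical check you need):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
```

Because the hook is a plain shell command, it fires deterministically every time, with no LLM tokens spent.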

Level 4 : MCP Servers

https://mcpservers.org/

https://github.com/wong2/awesome-mcp-servers 

add a .mcp.json
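A minimal .mcp.json might look like this (the GitHub server is just an example from the lists above; the token value is a placeholder):

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}
```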

Alternatively, instead of writing .mcp.json by hand, you can run the `claude mcp add` command, which generates the JSON file for you.
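As a sketch, the CLI equivalent of the .mcp.json above would be something like the following (server name and package are example values; `--scope project` writes the entry into the project's .mcp.json):

```
claude mcp add github --scope project -- npx -y @modelcontextprotocol/server-github
```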

Level 5 : Humans move into a supervisor role with GSD

GSD = Get Shit Done

Removing ourselves from the thinking overhead - Project Frameworks - GSD
Install it with one command:
    npx get-shit-done-cc
It leverages the same AskUserQuestion feature we saw used heavily in plan mode. The key difference here is the level of detail in the breakdown of the plan, which helps solve the biggest problem with long Claude sessions: context rot.

Run through each project phase using the plan, execute and verify commands in sequence:
- /gsd:plan phase X
- /gsd:execute phase X
- /gsd:verify phase X

Context is pulled for each phase from the overall project documents
- ROADMAP.md
- REQUIREMENTS.md
- STATE.md

as well as phase specific documents.


Context Rot
 
Frameworks like GSD help manage context rot: they keep context broken into smaller files.

Level 6 : A Team of Agents

Instead of one Claude agent doing everything (researching, writing, etc.), we have a team of agents and split the job into separate sub-agents.

Running multiple agents increases your leverage.

Summarised very well in this Reddit thread

It is all about context rot and context isolation.

Level 7 : Fully Autonomous Pipelines : Ralph Loop
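The Ralph loop is, at its core, just re-running the agent against the same prompt until the job is done, with the repo state carrying progress between runs. A minimal sketch (assuming the `claude` CLI and a PROMPT.md with the task and completion criteria; `--dangerously-skip-permissions` is what makes it unattended, so only run it in a sandboxed environment):

```
# Ralph loop: feed the same prompt to Claude repeatedly;
# each run picks up where the last left off via files and git history
while true; do
  cat PROMPT.md | claude -p --dangerously-skip-permissions
done
```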


Crimson (wip)

https://www.miamiherald.com/careers-education/crimson-education-review/


AP Study Guide: how to make a sub-agent that takes its context from a video?


Can I host this on Hugging Face? Or some UI?


Saturday, July 19, 2025

MCP : The basics

What is MCP ?

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.



Please note: each MCP client maintains a 1:1 connection with a single MCP server, so one client can't talk to multiple servers. A host that needs multiple servers runs one client per server.
 
In very simple terms, MCP provides a standard way to expose tools, resources, and prompts.
 

Why MCP ?

While typical models (like Claude, ChatGPT) excel at responding to natural language, they've been constrained by their isolation from real-world data and systems.

The Model Context Protocol (MCP) addresses this challenge by providing a standardized way for LLMs to connect with external data sources and tools—essentially a “universal remote” for AI apps. Released by Anthropic as an open-source protocol, MCP builds on existing function calling by eliminating the need for custom integration between LLMs and other apps. This means developers can build more capable, context-aware applications without reinventing the wheel for each combination of AI model and external system. 


So basically, the Model Context Protocol (MCP) aims to extend the reach of LLMs beyond their inherent limitation of finite context. While LLMs do have a finite context window, MCP mitigates that limitation by providing a standardized way for LLMs to access external context.

Architecture

At its core, MCP follows a client-server architecture where a host application can connect to multiple servers:

  • MCP Hosts: Programs like Claude Desktop, IDEs, or AI tools that want to access data through MCP
  • MCP Clients: Protocol clients that maintain 1:1 connections with servers
  • MCP Servers: Lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
  • Local Data Sources: Your computer’s files, databases, and services that MCP servers can securely access
  • Remote Services: External systems available over the internet (e.g., through APIs) that MCP servers can connect to
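The host/client/server split above can be sketched in plain Python (hypothetical classes for illustration only, not the real MCP SDK): each client holds exactly one server connection, and the host reaches many servers by owning one client per server.

```python
class MCPServer:
    """A server exposes a specific set of capabilities (e.g. tools)."""
    def __init__(self, name, tools):
        self.name = name
        self.tools = tools

class MCPClient:
    """A client maintains a 1:1 connection with exactly one server."""
    def __init__(self, server):
        self.server = server

    def list_tools(self):
        return self.server.tools

class Host:
    """A host (e.g. Claude Desktop) spawns one client per server it uses."""
    def __init__(self):
        self.clients = []

    def connect(self, server):
        self.clients.append(MCPClient(server))

    def all_tools(self):
        # Aggregate capabilities across every connected server
        return [t for c in self.clients for t in c.list_tools()]

host = Host()
host.connect(MCPServer("filesystem", ["read_file", "write_file"]))
host.connect(MCPServer("github", ["create_issue"]))
print(host.all_tools())  # ['read_file', 'write_file', 'create_issue']
```

This is only a mental model of the topology; the real protocol adds transports, capability negotiation, and a session lifecycle on top of it.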

Why it matters ?

At first glance, MCP might seem like infrastructure plumbing. But in reality, it unlocks some major shifts in how we build with LLMs:
  • Tool interoperability: Want your model to use a calendar, database, and CRM? With MCP, these tools can all speak the same context language.
  • Composable agents: Context is now modular. You can plug in a user profile, domain-specific knowledge, or even dynamic system prompts — all cleanly separated.
  • Portability: Applications that conform to MCP aren’t locked into a specific LLM. Whether it’s OpenAI, Anthropic, or a local model, the context wiring stays the same.
  • Safer, more predictable AI: With structured context, you get less hallucination, more traceability, and clearer guardrails.



References

  • https://modelcontextprotocol.io/introduction 
  • https://www.descope.com/learn/post/mcp
  • https://cloud.google.com/blog/topics/developers-practitioners/build-and-deploy-a-remote-mcp-server-to-google-cloud-run-in-under-10-minutes 


Sunday, May 18, 2025

MCP Resources vs Tools

Resource

"Resources are designed to be application-controlled, meaning that the client application can decide how and when they should be used. Different MCP clients may handle resources differently."

Tools

From https://modelcontextprotocol.io/docs/concepts/tools:
"Tools are designed to be model-controlled, meaning that tools are exposed from servers to clients with the intention of the AI model being able to automatically invoke them (with a human in the loop to grant approval)."

https://www.reddit.com/r/ClaudeAI/comments/1jso42a/mcp_resources_vs_tools/

https://ramwert.medium.com/mcp-demystifying-mcp-resources-vs-tools-a-practical-guide-for-agentic-automation-cb07fcb82241