This repository contains tools and experiments for enhancing Claude AI's behavior through code injection and monitoring mechanisms. The primary tool demonstrates how external code can influence and guide Claude's decision-making processes during extended work sessions.
When Claude works for extended periods, its behavior can be monitored, evaluated, and redirected by injected code. This creates a feedback loop in which the code becomes the "controller" of the AI's actions.
Located in `/tools/post_tool_inject.js`
This tool is triggered after every tool use by Claude (configured via `PostToolUse` hooks in `.claude/settings.json`). It serves as a behavioral control mechanism that can:
- Monitor Claude's tool-usage patterns
- Inject reminders about goals and objectives
- Ensure task completion through persistent checking
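As a rough sketch of how such a hook could be structured (hypothetical: the event shape and the `reminderFor` policy below are illustrative, assuming the hook receives a JSON event describing the tool call on stdin and emits JSON on stdout):

```javascript
// Hypothetical PostToolUse hook skeleton. In the real script the event would
// be parsed from process.stdin; here a sample event is used directly.
function reminderFor(event) {
  // Illustrative policy: nudge Claude after file-modifying tools.
  const watched = ["Write", "Edit", "Bash"];
  if (watched.includes(event.tool_name)) {
    return { reason: "Reminder: stay aligned with the original objective." };
  }
  return null; // no injection needed
}

const sampleEvent = { tool_name: "Write", tool_input: { file_path: "feature.js" } };
const output = reminderFor(sampleEvent);
if (output) console.log(JSON.stringify(output));
```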
```javascript
// Compare working memory across sessions
const memoryDelta = compareMemoryStates(previousState, currentState);
if (memoryDelta.divergence > threshold) {
  injectCorrection("Memory drift detected. Realigning to original objectives...");
}
```
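The helpers above (`compareMemoryStates`, `injectCorrection`) are left undefined in the snippet; a toy `compareMemoryStates`, assuming divergence is simply the fraction of tracked keys whose values changed between snapshots, could look like:

```javascript
// Illustrative compareMemoryStates: divergence = fraction of tracked keys
// whose values changed between snapshots (the snapshot shape is hypothetical).
function compareMemoryStates(prev, curr) {
  const keys = Object.keys(prev);
  const changed = keys.filter((k) => prev[k] !== curr[k]).length;
  return { divergence: keys.length ? changed / keys.length : 0 };
}

const prev = { goal: "ship feature", phase: "implement", focusFile: "feature.js" };
const curr = { goal: "ship feature", phase: "debug", focusFile: "utils.js" };
console.log(compareMemoryStates(prev, curr).divergence); // 2 of 3 keys changed
```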
```javascript
// Run automated tests to verify goal achievement
const goalTests = {
  'feature_complete': () => checkFileExists('feature.js'),
  'tests_passing': () => runTestSuite().allPassed,
  'documentation_updated': () => compareDocVersions()
};

const unmetGoals = Object.entries(goalTests)
  .filter(([name, test]) => !test())
  .map(([name]) => name);

if (unmetGoals.length > 0) {
  persistUntilComplete(unmetGoals);
}
```
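A self-contained version of this goal-checking pattern, with the filesystem and test-suite probes replaced by a hypothetical in-memory project state, might be:

```javascript
// Hypothetical goal checks evaluated against an in-memory project state,
// standing in for real filesystem / test-suite probes.
const projectState = {
  files: ["feature.js", "README.md"],
  testsPassed: false,
  docVersionCurrent: true,
};

const goalTests = {
  feature_complete: (s) => s.files.includes("feature.js"),
  tests_passing: (s) => s.testsPassed,
  documentation_updated: (s) => s.docVersionCurrent,
};

const unmetGoals = Object.entries(goalTests)
  .filter(([, test]) => !test(projectState))
  .map(([name]) => name);

console.log(unmetGoals); // only "tests_passing" is unmet
```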
```javascript
// Detect and correct behavioral loops
const patterns = analyzeActionSequence(recentActions);
if (patterns.includes('infinite_loop') || patterns.includes('task_avoidance')) {
  breakPattern();
  refocusOnObjective();
}
```
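`analyzeActionSequence` is left abstract above; one cheap stand-in is to check whether the tail of the action log repeats a short cycle, e.g.:

```javascript
// Illustrative stand-in for analyzeActionSequence: flag when the tail of the
// action log repeats the same short cycle several times in a row.
function detectLoop(actions, cycleLen = 2, repeats = 3) {
  const window = actions.slice(-cycleLen * repeats);
  if (window.length < cycleLen * repeats) return false;
  const cycle = window.slice(0, cycleLen).join("|");
  for (let i = 0; i < repeats; i++) {
    if (window.slice(i * cycleLen, (i + 1) * cycleLen).join("|") !== cycle) return false;
  }
  return true;
}

console.log(detectLoop(["read", "edit", "read", "edit", "read", "edit"])); // true
console.log(detectLoop(["read", "edit", "test", "fix", "test", "pass"])); // false
```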
```javascript
// Intelligently manage context to prevent forgetting
const contextUsage = measureContextUtilization();
if (contextUsage > 0.8) {
  compressNonCriticalMemory();
  preserveGoalContext();
}
```
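One way `compressNonCriticalMemory` and `preserveGoalContext` could be combined is a single pass that drops the oldest non-goal entries until usage falls under a budget (the entry shape and token counts below are made up for illustration):

```javascript
// Illustrative context-compression pass: drop the oldest non-goal entries
// until estimated token usage falls below a budget.
function compressContext(entries, budget) {
  let total = entries.reduce((sum, e) => sum + e.tokens, 0);
  const kept = [...entries];
  for (let i = 0; i < kept.length && total > budget; ) {
    if (!kept[i].critical) {
      total -= kept[i].tokens;  // evict oldest non-critical entry
      kept.splice(i, 1);
    } else {
      i++;                      // never evict goal-critical entries
    }
  }
  return kept;
}

const entries = [
  { id: "goal", tokens: 50, critical: true },
  { id: "log-1", tokens: 400, critical: false },
  { id: "log-2", tokens: 300, critical: false },
  { id: "latest", tokens: 200, critical: false },
];
console.log(compressContext(entries, 600).map((e) => e.id)); // goal is preserved
```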
```javascript
// Coordinate between multiple Claude instances or sub-agents
const agentStates = collectAgentStates();
const consensus = buildConsensus(agentStates);
broadcastDirective(consensus.nextAction);
```
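`buildConsensus` could be as simple as a majority vote over each agent's proposed next action (the state shape here is hypothetical):

```javascript
// Illustrative consensus: pick the next action most agents agree on.
function buildConsensus(agentStates) {
  const votes = {};
  for (const s of agentStates) {
    votes[s.proposedAction] = (votes[s.proposedAction] || 0) + 1;
  }
  const [nextAction] = Object.entries(votes).sort((a, b) => b[1] - a[1])[0];
  return { nextAction };
}

const states = [
  { agent: "a", proposedAction: "run_tests" },
  { agent: "b", proposedAction: "run_tests" },
  { agent: "c", proposedAction: "refactor" },
];
console.log(buildConsensus(states).nextAction); // "run_tests"
```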
```javascript
// Track and optimize Claude's performance over time
const metrics = {
  taskCompletionTime: measureTaskDuration(),
  errorRate: calculateErrorFrequency(),
  contextSwitches: countContextChanges(),
  goalAlignment: measureGoalProgress()
};
adaptBehaviorBasedOnMetrics(metrics);
```
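One concrete shape `adaptBehaviorBasedOnMetrics` could take is adjusting how often reminders are injected; the thresholds below are arbitrary illustrations, not tuned values:

```javascript
// Illustrative adaptation rule: tighten the reminder interval (in tool calls)
// when alignment drops or errors rise, relax it otherwise.
function adaptReminderInterval(current, metrics) {
  if (metrics.goalAlignment < 0.5 || metrics.errorRate > 0.2) {
    return Math.max(1, Math.floor(current / 2)); // remind more often
  }
  return Math.min(20, current + 1); // back off gradually
}

console.log(adaptReminderInterval(8, { goalAlignment: 0.4, errorRate: 0.1 })); // 4
console.log(adaptReminderInterval(8, { goalAlignment: 0.9, errorRate: 0.0 })); // 9
```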
```javascript
// Deep semantic analysis of goals vs actions
const semanticAlignment = analyzeSemantic(currentGoal, recentActions);
if (semanticAlignment.score < 0.7) {
  injectClarification("Actions diverging from semantic intent of goal");
  suggestAlternativeApproach();
}
```

- Clone this repository
- Copy tools to appropriate system locations
- Configure Claude's settings.json to use the hooks
- Monitor Claude's behavior changes
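Concretely, the copy-and-configure steps might look like the following shell sketch (the `~/.claude-tools` location is an arbitrary example, and the `printf` line merely stands in for copying the real script out of the repository):

```shell
# Example install location -- adjust to your setup.
mkdir -p "$HOME/.claude-tools"

# Placeholder standing in for: cp tools/post_tool_inject.js "$HOME/.claude-tools/"
printf '#!/usr/bin/env node\nconsole.log("hook ran")\n' > "$HOME/.claude-tools/post_tool_inject.js"

# Hook commands must be executable.
chmod +x "$HOME/.claude-tools/post_tool_inject.js"
```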
```json
{
  "hooks": {
    "PostToolUse": [{
      "matcher": "*",
      "hooks": [{
        "type": "command",
        "command": "/path/to/tools/post_tool_inject.js"
      }]
    }]
  }
}
```

This toolset explores the boundary between AI autonomy and external control. By injecting code that monitors and influences Claude's behavior, we create a meta-layer of intelligence that:
- Ensures goal persistence across long sessions
- Prevents context degradation
- Maintains focus despite distractions
- Creates accountability mechanisms
- Enables self-correction behaviors
The tool essentially makes Claude "aware" that it's being controlled by code, creating an interesting recursive loop where the AI knows it's being influenced but continues to operate within those constraints.
- Adaptive Control: Tools that learn optimal control strategies
- Collaborative Control: Multiple tools working in concert
- Ethical Boundaries: Ensuring control mechanisms remain beneficial
- Performance Optimization: Using control to maximize efficiency
- Memory Persistence: Maintaining goals across session boundaries
Feel free to experiment with new control mechanisms and behavioral modifications. The key is to create tools that enhance Claude's capabilities while maintaining beneficial outcomes.
MIT
"When code controls the controller, who is really in control?"