**recent** by lancejames221b
Show recently used hAIveMind commands with outcomes and pattern analysis.

Install: `npx ai-builder add command lancejames221b/recent` (installs to `.claude/commands/recent.md`)
# recent - Command History & Pattern Analysis
## Purpose
Intelligent command history system that shows your recently used hAIveMind commands with success rates, execution times, and pattern analysis to help optimize your workflow and identify improvement opportunities.
## When to Use
- **Performance Analysis**: Review command success rates and execution times
- **Workflow Optimization**: Identify patterns in your command usage for efficiency improvements
- **Troubleshooting**: Analyze recent failures to identify recurring issues
- **Learning**: Understand your usage patterns and discover optimization opportunities
- **Audit Trail**: Review recent actions for compliance or debugging purposes
- **Pattern Recognition**: Discover successful command sequences for future use
## Syntax
```
recent [limit]
```
## Parameters
- **limit** (optional): Number of recent commands to display (default: 10, max: 50)
- Examples: `recent 5`, `recent 20`, `recent 50`
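
As a rough sketch of the documented bounds (default 10, maximum 50), argument handling might look like the following; the function name and fallback behavior are assumptions for illustration, not the tool's actual implementation.

```
def resolve_limit(raw=None, default=10, maximum=50):
    """Clamp the optional [limit] argument to the documented range (1-50)."""
    if raw is None or not str(raw).strip():
        return default
    try:
        value = int(raw)
    except ValueError:
        return default  # assumption: fall back to the default on a bad argument
    return max(1, min(value, maximum))
```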
## Intelligent Analysis Features
### Usage Pattern Recognition
- **Command Sequences**: Identifies common command patterns and their success rates (see the sketch after this list)
- **Timing Analysis**: Analyzes time between commands to identify workflow efficiency
- **Success Correlation**: Shows which command combinations lead to best outcomes
- **Context Awareness**: Correlates command usage with project types and situations
- **Trend Analysis**: Identifies changes in usage patterns over time
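
To make the sequence analysis concrete, here is a minimal Python sketch of how recurring command sequences and their success rates could be mined from a local history log. The record fields (`command`, `success`) and the function name are assumptions for illustration, not the actual hAIveMind internals.

```
from collections import Counter, defaultdict

def mine_sequences(history, window=2):
    """Count adjacent command sequences and the share of fully successful runs.

    `history` is assumed to be a chronological list of dicts with at least
    'command' and 'success' keys - a simplification of the real record format.
    """
    counts = Counter()
    successes = defaultdict(int)
    for i in range(len(history) - window + 1):
        chunk = history[i:i + window]
        key = " -> ".join(entry["command"] for entry in chunk)
        counts[key] += 1
        if all(entry["success"] for entry in chunk):
            successes[key] += 1
    return {seq: {"uses": n, "success_rate": successes[seq] / n}
            for seq, n in counts.most_common()}
```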
### Performance Insights
- **Execution Time Tracking**: Shows how long each command took to complete
- **Success Rate Analysis**: Tracks which commands succeed most often
- **Error Pattern Detection**: Identifies recurring failure patterns
- **Efficiency Metrics**: Calculates workflow efficiency and suggests improvements
- **Comparative Analysis**: Compares your patterns with collective best practices
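
The metrics above could be derived from the same assumed history records; the sketch below shows one plausible aggregation (success rate, average duration, share of slow commands) and is illustrative only.

```
from statistics import mean

def summarize(history):
    """Aggregate success rate and execution-time stats from history records.

    Assumes each record has 'success' (bool) and 'duration_s' (float or None);
    timed-out commands without a duration are skipped in the timing figures.
    """
    total = len(history)
    durations = [r["duration_s"] for r in history if r.get("duration_s") is not None]
    return {
        "total": total,
        "success_rate": sum(1 for r in history if r["success"]) / total if total else 0.0,
        "avg_duration_s": mean(durations) if durations else None,
        "slow_share": sum(d > 3.0 for d in durations) / len(durations) if durations else 0.0,
    }
```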
## Real-World Examples
### Basic Recent Command History
```
recent
```
**Result**: Last 10 commands with timestamps, success status, execution times, and context
### Extended History Analysis
```
recent 25
```
**Result**: Last 25 commands with detailed pattern analysis, success trends, and optimization suggestions
### Quick Recent Check
```
recent 5
```
**Result**: Last 5 commands with focus on immediate recent activity and quick insights
## Expected Output
### Standard Recent Commands View
```
⏱️ Recent Command History - 2025-01-24 14:30:00

📊 Usage Summary (Last 10 commands):
   ↳ Total Commands: 10
   ↳ Success Rate: 90% (9 successful, 1 failed)
   ↳ Average Execution Time: 1.8 seconds
   ↳ Most Used: hv-status (3x), remember (2x), hv-query (2x)

📋 Recent Commands:

1. ✅ hv-status --detailed  14:28:45  1.2s  general
   ↳ Success: Checked collective health, found 2 agents with high response times
   ↳ Context: Routine health monitoring
   ↳ Follow-up: Led to investigation of performance issues

2. ✅ hv-broadcast "Database optimization completed" infrastructure info  14:25:30  0.8s  infrastructure
   ↳ Success: Notified 12 agents about database improvements
   ↳ Context: Completing database optimization project
   ↳ Impact: Team aware of performance improvements

3. ✅ remember "Database query optimization reduced response time by 60%" infrastructure  14:23:15  0.5s  documentation
   ↳ Success: Documented optimization results for future reference
   ↳ Context: Preserving successful optimization techniques
   ↳ Tags: database, optimization, performance

4. ✅ hv-delegate "Monitor database performance metrics" monitoring  14:20:45  2.1s  task_assignment
   ↳ Success: Assigned monitoring task to monitoring specialists
   ↳ Context: Ensuring optimization results are tracked
   ↳ Assigned to: monitoring-agent, grafana-agent

5. ✅ hv-query "database performance optimization techniques"  14:18:30  3.2s  research
   ↳ Success: Found 8 relevant memories with optimization strategies
   ↳ Context: Researching before implementing database changes
   ↳ Results: Query indexing, connection pooling, cache strategies

6. ❌ hv-sync --force  14:15:20  timeout  system_maintenance
   ↳ Failed: Timeout during configuration synchronization
   ↳ Context: Attempting to update agent configurations
   ↳ Error: Network connectivity issues with 3 agents
   ↳ Resolution: Retry succeeded after network stabilized

7. ✅ recall "database issues last 7 days" incidents  14:12:10  1.9s  investigation
   ↳ Success: Retrieved 4 recent database-related incidents
   ↳ Context: Understanding recent database problems before optimization
   ↳ Findings: Connection pool issues, slow query patterns

8. ✅ hv-status  14:10:05  0.9s  general
   ↳ Success: Quick health check before starting database work
   ↳ Context: Verifying system stability before changes
   ↳ Status: All systems operational, ready for optimization

9. ✅ help hv-delegate  14:08:30  0.3s  learning
   ↳ Success: Reviewed delegation syntax and best practices
   ↳ Context: Learning proper task delegation techniques
   ↳ Outcome: Better understanding of specialist assignment

10. ✅ remember "Starting database optimization project" infrastructure  14:05:15  0.4s  documentation
   ↳ Success: Documented project initiation for tracking
   ↳ Context: Beginning systematic database performance improvement
   ↳ Project: Database optimization initiative

🔍 Pattern Analysis:

🎯 Successful Command Sequences (Last 24 hours):
1. help → hv-query → hv-delegate → hv-broadcast (Success Rate: 100%, Used 3 times)
   ↳ Pattern: Learn → Research → Assign → Communicate
   ↳ Average Duration: 8.2 minutes
   ↳ Effectiveness: High - leads to well-informed actions

2. hv-status → recall → remember (Success Rate: 95%, Used 4 times)
   ↳ Pattern: Check → Research → Document
   ↳ Average Duration: 4.1 minutes
   ↳ Effectiveness: High - good for investigation workflows

3. remember → hv-broadcast → hv-delegate (Success Rate: 90%, Used 2 times)
   ↳ Pattern: Document → Communicate → Assign
   ↳ Average Duration: 3.8 minutes
   ↳ Effectiveness: Good - effective for sharing and follow-up

⏱️ Timing Patterns:
   ↳ Average time between commands: 3.2 minutes
   ↳ Fastest sequence: help → hv-query (30 seconds)
   ↳ Longest gap: 15 minutes (between investigation and action)
   ↳ Most efficient hour: 14:00-15:00 (current session)

📈 Performance Insights:
   ↳ Commands with <1s execution time: 60% (very efficient)
   ↳ Commands with >3s execution time: 20% (investigate optimization)
   ↳ Network-dependent commands: 30% (consider local caching)
   ↳ Documentation commands: 30% (good knowledge preservation)

💡 Optimization Recommendations:

1. 🎯 Workflow Efficiency
   ↳ Your "help → research → act" pattern is highly effective (100% success)
   ↳ Consider standardizing this approach for complex tasks
   ↳ Time savings: ~15% by following proven patterns

2. ⚡ Command Performance
   ↳ hv-query commands taking >3s - consider more specific search terms
   ↳ hv-sync timeout suggests network optimization needed
   ↳ Batch similar commands to reduce context switching

3. 📝 Documentation Habits
   ↳ Good documentation frequency (30% of commands)
   ↳ Consider adding more context tags to remember commands
   ↳ Document failed attempts for collective learning

4. 🔄 Pattern Optimization
   ↳ Your command sequences show good planning and follow-through
   ↳ Consider using workflows command for complex multi-step procedures
   ↳ Share successful patterns with team via hv-broadcast

📊 Comparative Analysis:
   ↳ Your success rate (90%) is above collective average (87%)
   ↳ Your documentation rate (30%) is excellent (collective: 18%)
   ↳ Your command diversity shows good tool utilization
   ↳ Time between commands suggests thoughtful execution

🎯 Recommended Next Steps:
1. Continue current documentation practices - they're excellent
2. Investigate hv-sync timeout issues for better reliability
3. Consider using workflows for your successful command patterns
4. Share your effective patterns with team via hv-broadcast
```
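
The fields shown in the example output (command, timestamp, duration, outcome, category, context, tags, follow-up) suggest a record shape along the following lines. This dataclass is an illustrative assumption, not the tool's actual storage schema.

```
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class CommandRecord:
    """One history entry, mirroring the fields shown in the example output."""
    command: str                     # e.g. "hv-status --detailed"
    started_at: datetime
    duration_s: Optional[float]      # None when the command timed out
    success: bool
    category: str                    # e.g. "general", "infrastructure"
    context: str = ""                # free-text note on why it was run
    tags: List[str] = field(default_factory=list)
    follow_up: Optional[str] = None  # what the command led to, if recorded
```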
### Extended History with Deep Analysis
```
⏱️ Extended Command History Analysis - Last 25 Commands

📊 Comprehensive Usage Statistics:
   ↳ Total Commands: 25 (last 4 hours)
   ↳ Success Rate: 88% (22 successful, 3 failed)
   ↳ Average Execution Time: 2.1 seconds
   ↳ Command Diversity: 7 different commands used
   ↳ Most Active Period: 14:00-15:00 (12 commands)

📋 Command Frequency Analysis:
1. hv-status:    8 uses (32%) - Avg: 1.1s, Success: 100%
2. remember:     5 uses (20%) - Avg: 0.6s, Success: 100%
3. hv-query:     4 uses (16%) - Avg: 2.8s, Success: 50%
4. hv-broadcast: 3 uses (12%) - Avg: 0.9s, Success: 100%
5. hv-delegate:  2 uses (8%)  - Avg: 2.3s, Success: 100%
6. recall:       2 uses (8%)  - Avg: 1.7s, Success: 100%
7. hv-sync:      1 use  (4%)  - Avg: timeout, Success: 0%

📈 Trend Analysis (4-hour window):
   ↳ Command frequency increasing (2 → 5 → 8 → 10 per hour)
   ↳ Success rate stable around 88-92%
   ↳ Documentation rate increasing (good trend)
   ↳ Research-heavy period (multiple hv-query commands)

🎯 Advanced Pattern Recognition:

Workflow Pattern: "Database Optimization Project"
Timeline: 14:05 - 14:30 (25 minutes)
Commands: remember → help → hv-query → recall → hv-delegate → remember → hv-broadcast → hv-status
Success Rate: 87.5% (7/8 successful)
Outcome: Successful database optimization with team coordination

Pattern Breakdown:
1. Project initiation (remember) ✅
2. Learning phase (help) ✅
3. Research phase (hv-query) ✅
4. Historical analysis (recall) ✅
5. Task assignment (hv-delegate) ✅
6. Documentation (remember) ✅
7. Team communication (hv-broadcast) ✅
8. Verification (hv-status) ✅

Success Factors:
✅ Systematic approach with clear phases
✅ Good balance of research and action
✅ Proper documentation at key points
✅ Team communication and coordination
✅ Verification of outcomes

🚨 Failure Analysis:

Failed Command: hv-sync --force (14:15:20)
   ↳ Error Type: Network timeout
   ↳ Context: System maintenance during active work
   ↳ Impact: Delayed workflow by 5 minutes
   ↳ Resolution: Retry after network stabilization
   ↳ Prevention: Check network status before sync operations

Failed Commands Pattern:
   ↳ 3 failures out of 25 commands (12% failure rate)
   ↳ All failures were network-related (hv-sync, hv-query timeouts)
   ↳ Failures clustered around 14:15-14:20 (network issue period)
   ↳ No command logic or parameter errors

🎯 Optimization Opportunities:

1. Network Reliability (Priority: High)
   ↳ 100% of failures were network-related
   ↳ Consider network diagnostics before critical operations
   ↳ Implement retry logic for network-dependent commands
   ↳ Monitor network status: hv-status --network

2. Query Optimization (Priority: Medium)
   ↳ hv-query commands averaging 2.8s (above optimal)
   ↳ Use more specific search terms to reduce search time
   ↳ Consider caching frequently accessed information
   ↳ Break complex queries into smaller, focused searches

3. Workflow Standardization (Priority: Low)
   ↳ Your successful patterns could be formalized into workflows
   ↳ Database optimization pattern highly successful (87.5%)
   ↳ Consider creating custom workflow templates
   ↳ Share successful patterns with collective via documentation

📊 Performance Benchmarking:

Your Performance vs. Collective Averages:
   ↳ Success Rate: 88% (You) vs 87% (Collective) - Above Average ✅
   ↳ Execution Time: 2.1s (You) vs 1.8s (Collective) - Slightly Slower
   ↳ Documentation Rate: 20% (You) vs 18% (Collective) - Above Average ✅
   ↳ Command Diversity: 7 types (You) vs 5.2 (Collective) - Above Average ✅

Strengths:
✅ Excellent documentation habits
✅ Good command diversity and tool utilization
✅ Systematic approach to complex tasks
✅ Strong success rate despite network challenges

Improvement Areas:
   ↳ Network connectivity optimization
   ↳ Query efficiency improvement
   ↳ Command execution speed optimization
```
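
The failure analysis above recommends retry logic for network-dependent commands. A generic sketch of that idea, with arbitrary attempt counts and delays (the real commands and their error types are not specified here), might look like:

```
import time

def run_with_retry(run, attempts=3, base_delay=2.0):
    """Retry a network-dependent operation with exponential backoff.

    `run` is any callable that raises TimeoutError on a network timeout;
    the attempt count and delays are illustrative defaults, not documented values.
    """
    for attempt in range(1, attempts + 1):
        try:
            return run()
        except TimeoutError:
            if attempt == attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # waits 2s, then 4s, ...
```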
## Advanced Analysis Features
### Pattern Learning and Prediction
- **Success Prediction**: Predicts likelihood of command success based on context and history (see the sketch after this list)
- **Workflow Recognition**: Automatically identifies recurring workflow patterns
- **Optimization Suggestions**: Recommends command sequence improvements
- **Timing Optimization**: Suggests optimal timing for command execution
- **Context Correlation**: Links command success to environmental factors
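
As one way success prediction could work (the actual model is not documented), a naive frequency estimate over past records in the same category might look like this sketch:

```
def success_probability(history, command, category):
    """Estimate P(success) for a command in a given category from past records.

    A deliberately naive frequency estimate with add-one smoothing; the real
    predictor presumably weighs many more signals (timing, agent load, etc.).
    """
    hits, total = 1, 2  # Laplace smoothing so unseen pairs return 0.5
    for record in history:
        if record["command"].split()[0] == command and record["category"] == category:
            total += 1
            hits += 1 if record["success"] else 0
    return hits / total
```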
### Collective Intelligence Integration
- **Benchmark Comparison**: Compares your patterns with collective best practices
- **Success Rate Analysis**: Shows how your success rates compare to other agents
- **Pattern Sharing**: Identifies successful patterns worth sharing with collective
- **Learning Opportunities**: Suggests areas for improvement based on collective data
- **Best Practice Adoption**: Recommends proven patterns from high-performing agents
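
Benchmark comparison could be as simple as diffing personal metrics against collective averages; the metric names below are taken from the example output, and the rest is an assumption:

```
def compare_to_collective(mine, collective):
    """Report the delta between personal metrics and collective averages.

    Both arguments are assumed to be dicts like {"success_rate": 0.88};
    whether a positive delta is good depends on the metric (a higher
    success_rate is good, a higher avg_duration_s is not).
    """
    return {metric: round(value - collective[metric], 3)
            for metric, value in mine.items() if metric in collective}
```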
## Performance Considerations
- **History Storage**: Command history stored locally and in collective memory
- **Analysis Speed**: Pattern analysis completes in <500ms for typical history sizes
- **Memory Usage**: Optimized storage of command metadata and outcomes
- **Privacy**: Personal command history separate from collective analytics
- **Retention**: Command history retained for 30 days, patterns preserved longer
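
Given the stated 30-day retention for raw history (with patterns preserved longer), pruning might be implemented roughly as follows; the cutoff logic and record shape are assumptions:

```
from datetime import datetime, timedelta, timezone

def prune_history(history, days=30):
    """Drop raw history entries older than the documented 30-day retention.

    Only per-command records are filtered here; derived patterns are assumed
    to be stored separately and kept longer. Assumes timezone-aware datetimes.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [record for record in history if record["started_at"] >= cutoff]
```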
## Related Commands
- **For workflow optimization**: Use `workflows` to see formalized patterns
- **For command help**: Use `help <command>` to understand failed commands
- **For examples**: Use `examples` to see successful usage patterns
- **For suggestions**: Use `suggest` to get recommendations based on recent patterns
- **For analytics**: Use `help analytics` to see broader usage statistics
## Troubleshooting Recent Command Analysis
### Missing Command History
```
❌ No recent commands shown or incomplete history
💡 Possible Causes:
1. New installation - history builds over time
2. Command tracking not enabled
3. History storage issues
💡 Fix: Run commands to build history for analysis
```
### Inaccurate Success Rates
```
⚠️ Success rates don't match actual experience
💡 Troubleshooting:
1. Success tracking depends on command completion detection
2. Network timeouts may be misclassified
3. Manually verify recent command outcomes
4. Report discrepancies for system improvement
```
### Performance Analysis Issues
```
⏳ Pattern analysis taking too long or failing
💡 Resolution Steps:
1. Reduce history limit for faster analysis
2. Check system resources during analysis
3. Clear analysis cache if available
4. Report performance issues for optimization
```
## Best Practices for Command History Analysis
- **Regular Review**: Check recent commands daily to identify patterns and issues
- **Learn from Failures**: Analyze failed commands to prevent future issues
- **Optimize Patterns**: Use successful patterns as templates for future work
- **Share Insights**: Document and share successful patterns with team
- **Track Improvements**: Monitor how changes affect success rates and efficiency
- **Use for Planning**: Let recent patterns inform future workflow planning
---
**Continuous Improvement**: The recent command analysis system learns from your usage patterns to provide increasingly accurate insights and optimization recommendations, helping you become more effective over time.