
hasamba

@hasamba@infosec.exchange
mastodon 4.6.0-alpha.5+glitch

https://linktr.ee/yanivr

0 Followers
0 Following
Joined November 20, 2022

Posts

@hasamba@infosec.exchange · Mar 03, 2026

----------------

🦠 Malware Analysis
===================

Executive summary: A security researcher demonstrated a personal threat-research pipeline that uses coordinated AI agents to analyze an unknown malware sample end-to-end during a live keynote. The system completed static analysis, reverse-engineering tasks, enrichment, pivoting, and YARA testing, and produced a written report in approximately 30 minutes.

Methodology: The pipeline combines multiple autonomous agents to handle discrete tasks: automated static inspection, symbol and string extraction, behavioral inference, enrichment from telemetry and threat intelligence, iterative YARA hypothesis testing, and automated report assembly. The author documents multi-year experimentation with ML and early LLM use (noting initial experiments with GPT-1 in 2018) and later integration into a cohesive orchestration layer.

Key findings:
• The coordinated agents performed coverage traditionally associated with manual reversing—code structure analysis, pattern identification, and rule generation—within a short timeframe.
• The system integrated YARA testing as part of iterative detection hypothesis validation.
• The author frames the outcome as evidence that traditional reverse engineering skills may lose relative value as automated pipelines mature.

Technical analysis:
• Static analysis components focused on artifact extraction and pattern matching; an automated pivoting step used enrichment to discover related samples and context.
• Reverse-engineering tasks were delegated to agents that synthesize decompilation outputs and extract behavioral signatures for inclusion in reports.
• The pipeline produced human-readable reports and detection artifacts (YARA) without manual stepwise intervention from the presenter during the demo.
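
The iterative YARA-hypothesis step described above can be sketched as a propose-and-test loop. This is a pure-Python stand-in, not the author's pipeline: a real system would compile actual YARA rules, while here a candidate "rule" is just a printable-string pattern extracted from the sample and tested against a benign corpus.

```python
def propose_patterns(sample: bytes, min_len: int = 6) -> list:
    """Naive hypothesis generation: printable runs from the sample
    (a real agent would work from strings/decompilation output)."""
    runs, cur = [], bytearray()
    for byte in sample:
        if 32 <= byte < 127:
            cur.append(byte)
        else:
            if len(cur) >= min_len:
                runs.append(bytes(cur))
            cur = bytearray()
    if len(cur) >= min_len:
        runs.append(bytes(cur))
    return runs

def iterate_hypotheses(sample: bytes, benign: list) -> list:
    """The 'test and refine' step: keep only candidate patterns that
    never occur in the benign corpus (zero false positives)."""
    return [p for p in propose_patterns(sample)
            if not any(p in b for b in benign)]
```

In a real loop, surviving patterns would be emitted as YARA strings and re-tested against fresh telemetry before inclusion in the report.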

Limitations & caveats:
• The article describes a personal research system rather than a production-grade, peer-reviewed platform; specifics on model training data, false positive/negative rates, or sandboxing constraints are not published.
• No IoCs, CVEs, or precise telemetry examples were provided in the write-up.
• The claim that reverse engineering is becoming obsolete is the author's perspective, based on this capability demonstration rather than industry-wide empirical data.

Implications: The demonstration highlights rapid advances in orchestration of LLMs and automation for malware triage and detection artifact generation, while raising questions about validation, trust, and handling of adversarial samples.

🔹 YARA #GPT1 #microsoft_defender #malwareanalysis #AI

🔗 Source: https://x.com/fr0gger_/article/2028014798546378938

@hasamba@infosec.exchange · Mar 03, 2026

----------------

🛠️ Tool
===================

Opening: OpenAnt is an open-source, LLM-driven vulnerability discovery product released by Knostic. The project targets open-source repositories and offers free scans while also being available as a self-hostable GitHub project and an optional managed service.

Key Features:
• Aggressive semantic analysis across function boundaries and call context to identify risky code paths.
• Multi-stage LLM verification pipeline that assesses exploitability rather than returning pattern matches.
• Emphasis on reducing false positives, with the vendor reporting up to 99.98% reduction on popular open-source projects.

Technical Implementation (conceptual):
• Static code parsing to extract functions, call graphs and dependency contexts.
• LLM-based semantic summarization to interpret what code does in context (inputs, outputs, side effects).
• A verification stage where chained LLM prompts and reasoning steps determine whether a finding is actually exploitable, using call-context, taint propagation reasoning and environmental assumptions.
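
The first stage above (static parsing into call graphs) can be illustrated with Python's own `ast` module. This is a minimal sketch of the concept, not OpenAnt's implementation: it maps each function to the names it calls, the kind of call-context a downstream semantic stage would consume.

```python
import ast
from collections import defaultdict

def call_graph(source: str) -> dict:
    """Map each function definition to the set of names it calls."""
    tree = ast.parse(source)
    graph = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for inner in ast.walk(node):
                # Only direct name calls; attribute calls (obj.m()) omitted
                # to keep the sketch short.
                if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                    graph[node.name].add(inner.func.id)
    return dict(graph)
```

An LLM verification stage would then be prompted with each function body plus its callers/callees from this graph, rather than with the function in isolation.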

Use Cases:
• Automated triage for maintainers of large open-source projects trying to prioritize genuine vulnerabilities.
• Supplementary analysis for security teams performing pre-release audits or dependency reviews.
• Research into LLM-assisted vulnerability verification and reduction of analyst workload.

Limitations and Considerations:
• Results depend on LLM reasoning quality and prompt design; edge cases can still produce errors.
• Environmental and runtime assumptions may be required to conclude exploitability, which static analysis plus LLM inference might not fully validate.
• Vendor-reported false-positive reduction figures (99.98%) are dataset-dependent and should be interpreted relative to the evaluated projects.

References:
• Project announced as open-source by Knostic with free scans for open-source projects and an available GitHub repository; managed service option also mentioned.

🔹 tool #LLM #OpenAnt #code_scanning #vulnerability_verification

🔗 Source: https://github.com/Michaelliv/psst

@hasamba@infosec.exchange · Mar 03, 2026

----------------

Noah Vincent on X: "https://t.co/MpSGVYBF2x..."

Content processing error.

🔗 Source: https://x.com/noahvnct/status/2027435582461259997

@hasamba@infosec.exchange · Mar 03, 2026

----------------

🛠️ Personal Knowledge Management
===================

Executive summary: Greg Isenberg outlines a lightweight PKM pattern: store all items as Markdown notes in Obsidian, interlink them so the graph mirrors cognitive structure, and layer Claude Code automations to run processes continuously. The approach frames Obsidian as the canonical data store and Claude Code as the automation/agent layer responsible for 24/7 operations.

Technical details:
• Core artifacts: Markdown daily notes, project pages, people pages, beliefs pages, and meeting logs.
• Linking model: heavy use of backlinks and transclusion to create a navigable knowledge graph; Zettelkasten-style atomic notes are encouraged for recomposability.
• Automation layer: Claude/Claude Code acts as a programmatic interface that can read, synthesize, and output changes to the Markdown corpus, enabling scheduled summarization, task triage, or drafting.

How it works conceptually:
• A single source of truth lives in the Obsidian vault as plain-text Markdown.
• Notes are semantically linked to form graph structures that reflect mental models.
• Claude Code operates on the vault via a connector or API layer (conceptual), performing periodic scans, generating summaries, and triggering updates to notes.
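
The link-graph idea above is easy to make concrete. A hedged sketch (not Isenberg's setup): scan Markdown note bodies for Obsidian-style `[[wikilinks]]` and build the note-to-note graph an automation layer would traverse.

```python
import re

# Matches [[Note]], [[Note|alias]], and [[Note#heading]] -- the target
# name is everything before an optional alias '|' or heading '#'.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def link_graph(vault: dict) -> dict:
    """vault maps note name -> Markdown text; returns note -> set of
    outgoing wikilink targets (the navigable knowledge graph)."""
    return {name: {m.strip() for m in WIKILINK.findall(text)}
            for name, text in vault.items()}
```

In practice the `vault` dict would be populated by reading `*.md` files from the Obsidian vault directory.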

Use cases:
• Rapid ideation and iterative product design for early-stage startups.
• Continuous meeting capture and action-item generation.
• Personal knowledge base that surfaces context-aware summaries on demand.

Limitations and considerations:
• Data governance: storing sensitive business or personal data in a centralized vault requires clear handling policies and encryption at rest if hosted externally.
• Model dependence: automation quality depends on the capabilities and reliability of the LLM; hallucinations and inconsistent edits are risks.
• Sync and consistency: concurrent edits, merge conflicts, and versioning must be managed to avoid data loss.

Practical notes:
• Favor atomic notes and consistent linking conventions to maximize the graph's utility.
• Treat Claude-driven edits as proposals that should be reviewed when precision matters.

🔹 obsidian #claude #pkm #automation #workflows

🔗 Source: https://x.com/gregisenberg/status/2026036464287412412

@hasamba@infosec.exchange · Mar 03, 2026

----------------

🛠️ Tool
===================

Opening: psst is a command-line secret vault and runtime injector aimed at agent-based automation. The core proposition is that an agent can request a secret to be applied to a subprocess environment without the secret ever being exposed in the agent's context window, terminal history, or incidental logs.

Key Features:
• Secrets vault with named environments (dev/staging/prod).
• Runtime secret injection into subprocess environments so the agent only observes success/failure rather than secret material.
• OS keychain integration for storing the vault encryption key and support for encrypted vault backups.
• Management primitives: add/update/list/remove secrets, import/export from .env formats, lock/unlock vault for transport.
• Global vs local vault scoping and tag/filtering metadata for secret sets.

Technical implementation:
• Vaults are organized per environment; example storage path is ~/.psst/envs/<name>/vault.db.
• At runtime, psst injects secrets by setting environment variables in the spawned subprocess environment rather than printing or returning values to standard output.
• Encryption keys are stored or retrieved from the host OS keychain to avoid embedding raw keys in files.
• CLI exposes structured output modes (including JSON) and quiet/exit-code semantics for automation.
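
The injection mechanism described above can be sketched in a few lines. This is a conceptual stand-in for psst's behavior, not its code: the secret is placed only in the child's environment, and the caller observes nothing but an exit code.

```python
import os
import subprocess
import sys

def run_with_secret(cmd: list, name: str, value: str) -> int:
    """Spawn cmd with the secret set only in the child's environment;
    the orchestrating agent sees the exit code, never the value."""
    env = dict(os.environ)
    env[name] = value
    result = subprocess.run(cmd, env=env, capture_output=True)
    return result.returncode  # success/failure only, no secret material

# Child checks it received API_KEY; the caller only learns 0 or 1.
code = run_with_secret(
    [sys.executable, "-c",
     "import os, sys; sys.exit(0 if os.environ.get('API_KEY') else 1)"],
    "API_KEY", "s3cr3t")
```

Note this illustrates exactly the caveat in the next section: the child process itself could still log or leak the variable.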

Use cases:
• Agent-driven API calls where the agent should never receive raw API keys or database credentials.
• CI/CD or automation runners that must keep secrets off logs and off the orchestrating process context.
• Team-shared secret sets separated by environment tags and scoped vaults.

Limitations and considerations:
• Security depends on host OS keychain integrity and local filesystem protections for vault.db files.
• Secrets injected into subprocess environments may still be exposed if the subprocess itself logs or leaks environment variables.
• Access control on the host remains the primary boundary; psst does not provide remote secret distribution by itself.

References: psst focuses on secret injection for agent workflows and supports environment scoping, keychain-backed vault encryption, and import/export of .env formats.

🔹 tool

🔗 Source: https://github.com/Michaelliv/psst

@hasamba@infosec.exchange · Mar 03, 2026

----------------

🛠️ Tool
===================

Opening: This notebook demonstrates a site reliability engineering (SRE) incident-response agent built with the Claude Agent SDK and MCP-scoped tools. The agent performs end-to-end incident workflows: ingesting observability signals, synthesizing a diagnosis, proposing or applying remediations, and producing documentation of actions taken.

Key Features:
• Autonomous investigation across metrics, logs, alerts and configuration sources.
• Remediation capabilities that include editing configuration files and restarting services under scoped permissions.
• Safety primitives: restricted directories, command allowlists, and validation hooks to constrain write actions.
• Human-in-the-loop modes that separate investigation from remediation to allow operator review before changes.

Technical Implementation:
• The deliverables include infra_setup.py (infrastructure definitions) and sre_mcp_server.py (MCP tool server). The latter defines JSON‑RPC tool handlers and implements scoped access controls for file edits and service control.
• Observability inputs are represented by metrics (Prometheus), application logs, alerts, and service configs; the agent synthesizes these signals to form a coherent root-cause hypothesis.
• The agent uses the Claude Agent SDK for orchestration and decision-making, and MCP tools as constrained actuation endpoints with validation hooks to verify intended changes.
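
The scoping primitives (restricted directories, command allowlists) can be sketched as guard functions an MCP tool handler would call before acting. The directory and command names below are hypothetical, not from the notebook.

```python
from pathlib import Path

ALLOWED_DIRS = [Path("/etc/app")]          # hypothetical write scope
ALLOWED_COMMANDS = {"systemctl", "cat"}    # hypothetical allowlist

def check_write(path: str) -> bool:
    """Reject writes outside the scoped directories; resolve first so
    '..' segments cannot escape the sandbox."""
    p = Path(path).resolve()
    return any(p.is_relative_to(d) for d in ALLOWED_DIRS)

def check_command(argv: list) -> bool:
    """Only allowlisted executables may be invoked by the agent."""
    return bool(argv) and Path(argv[0]).name in ALLOWED_COMMANDS
```

As the limitations section notes, the safety of such a scheme is only as good as these checks: an incomplete allowlist or an unresolved path comparison silently widens the agent's privileges.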

Use Cases:
• Nighttime paging scenarios where an SRE needs fast triage and safe remediation for 500-series API outages.
• Automation of repetitive recovery tasks (config fixes, service restarts) with operator oversight.
• Testing and hardening runbooks by simulating incidents against a contained environment.

Limitations:
• The notebook is demo-oriented and relies on simulated infrastructure components (Prometheus, Postgres, API server) rather than production integrations.
• Safety depends on the fidelity of scoping rules and validation hooks; incomplete scoping could expand agent privileges.
• Integration with external production services requires adapting MCP handlers and enforcing organizational access controls.

References:
• Included companion files: infra_setup.py, sre_mcp_server.py.

🔹 tool #SRE #MCP #observability

🔗 Source: https://platform.claude.com/cookbook/claude-agent-sdk-03-the-site-reliability-agent

@hasamba@infosec.exchange · Feb 24, 2026

----------------

🛠️ Tool
===================

Executive summary
OpenAI has rebranded its GPT-5-powered vulnerability scanner Aardvark as Codex Security and introduced a dedicated malware analysis pipeline. The new Malware tab accepts .zip bundles up to 200MB, stages samples in an internal system called Sediment, and produces structured analysis artifacts including verdicts, SHA256 hashes, extracted files, runtime metrics, and downloadable artifact bundles.

Key features
• Purpose-built malware workflow with a two-step process: staging in Sediment followed by job-driven analysis and a SOC-style dashboard.
• Existing code-security features retained: repository scanning with a reported 92% detection rate, commit-level threat modeling, sandbox validation, and Codex-powered patch generation.
• Job visibility: filtering by filename/hash, status categories (Active, Succeeded, Failed), average runtime tracking, and per-job artifact bundles.

Technical implementation (as reported)
• Staging layer named Sediment appears to be a centralized orchestration and analysis environment; OpenAI has not published architecture or operational details.
• The product previously used GPT-5 capabilities for static reasoning and sandbox-driven validation; it is unclear whether the malware pipeline relies on GPT-5.3-Codex, a specialized model, or a hybrid LLM plus conventional static/dynamic analysis stack.
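
The reported job output (bundle size check, SHA256 hashes, extracted files) can be illustrated with a small staging sketch. The structure and field names here are hypothetical, since OpenAI has not published the Sediment format.

```python
import hashlib
import io
import zipfile

def stage_bundle(zip_bytes: bytes, max_mb: int = 200) -> dict:
    """Hash an uploaded .zip and each extracted member, mimicking the
    reported per-job artifact record."""
    if len(zip_bytes) > max_mb * 1024 * 1024:
        raise ValueError("bundle exceeds size limit")
    record = {"sha256": hashlib.sha256(zip_bytes).hexdigest(), "files": {}}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            record["files"][name] = hashlib.sha256(zf.read(name)).hexdigest()
    return record
```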

Use cases
• Security teams seeking integrated code-vulnerability scanning and malware triage within a single interface.
• SOC analysts needing rapid artifact extraction, hash-based tracking, and structured verdicts for incident tracking.

Limitations and unknowns
• Access model is unspecified: private beta, Pro-tier, or restricted via Trusted Access for Cyber remains unclear.
• Underlying analysis engines, model variants, and isolation guarantees for handling malicious binaries have not been disclosed.
• No formal documentation or published detection performance metrics for the malware pipeline yet; prior 92% detection rate applies to repository code scanning benchmarks.

References / artifacts reported
• SHA256 hashes and downloadable artifact bundles are part of the job output.
• Backend reference: Sediment (staging/analysis engine).

🔹 Codex_Security #malware_analysis #tool #AIsec #Sediment

🔗 Source: https://awesomeagents.ai/news/openai-codex-security-aardvark-malware-analysis/

@hasamba@infosec.exchange · Feb 24, 2026

----------------

🎯 AI
===================

Executive summary: The source lists 11 practical tips to unlock more value from an OpenClaw agent. The core recommendations emphasize model orchestration, local hosting for file-heavy workflows, channel selection for interaction, reverse prompting, lightweight hardware-first scaling, and minimizing exposure of sensitive accounts.

Core concepts reported
• Model orchestration: Use a central model as the "brain" (reported as Opus) while delegating task-specific work to specialized models such as Codex for coding, Minimax 2.5 for research, and Qwen 3.5 for creative writing. The guidance frames this as both cost- and performance-optimizing.
• Local hosting vs VPS: Hosting the agent on a local device is presented as enabling faster, more productive file operations (example workflow: airdrop a phone video → automated transcript → translations → chaptering → thumbnail generation using a tool called Nano Banana).
• Interaction channels: Prefer Telegram for quick messages and Discord for complex, channelized workflows where subagents can run in parallel.
• Prompt techniques: Recommend frequent "reverse prompting"—asking the agent what the next best task is given user goals—to keep the agent productive and reduce idle time.
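
The "Opus brain + specialists" pattern above amounts to a routing table. The model assignments come from the post; the dispatch logic itself is a hypothetical sketch.

```python
# Task-type -> specialist model, as reported in the source.
SPECIALISTS = {
    "coding": "Codex",
    "research": "Minimax 2.5",
    "creative_writing": "Qwen 3.5",
}

def route(task_type: str) -> str:
    """Delegate known task types to specialists; anything else stays
    with the central 'brain' model."""
    return SPECIALISTS.get(task_type, "Opus")
```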

Reported tooling and patterns
• Developer tooling mentions include Claude Code, Codex CLI, and building a "Mission Control" UI (example given: NextJS) to host custom tooling and three starter tools suggested by the agent.
• Resource-light scaling: Start on an old laptop and scale to Mac Minis / Mac Studios only as needed.

Privacy and operational cautions (as stated)
• The content explicitly advises not granting the agent access to email (Gmail) due to prompt-injection vectors and limited automation value.
• The content also warns against creating or delegating an X (Twitter) account for the agent because of platform restrictions and enforcement risk.

Limitations and tone from source
• The material is procedural and experiential rather than empirical; specific performance metrics are not provided.
• Recommendations are framed as user workflows and preferences rather than formal security guidance, though they include privacy-minded cautions about account access.

Practical takeaways (reported)
• Treat OpenClaw as an orchestrator (Opus brain + specialized models).
• Prefer local devices for tight file loops and rapid iteration.
• Use Telegram/Discord purposefully and avoid exposing email/X accounts.

🔹 OpenClaw #model_orchestration #automation #agents #privacy

🔗 Source: https://x.com/AlexFinn/article/2025302022749389282

@hasamba@infosec.exchange · Feb 24, 2026

----------------

🛠️ Tool
===================

Opening:
Scrapling is an adaptive web-scraping framework designed to cover everything from simple single-request tasks to large-scale concurrent crawls. The project combines an adaptive parser that re-locates selectors when pages change, multiple fetcher implementations that handle headless and anti-bot scenarios, and a spider framework offering multi-session concurrency with proxy rotation.

Key Features:
• Parser that learns from website changes and automatically relocates target elements.
• Multiple fetchers including Fetcher, AsyncFetcher, StealthyFetcher, and DynamicFetcher for different fetching strategies.
• Anti-bot bypass capabilities advertised for common protections such as Cloudflare Turnstile.
• Spider framework supporting concurrent crawls with pause/resume and automatic proxy rotation.
• Real-time statistics and streaming support for large-scale operations.

Technical Implementation (conceptual):
• The adaptive parser is described as learning from DOM changes and updating element selection rules without manual intervention; class-based selectors and CSS extraction appear to be first-class primitives (examples show page.css('.product')).
• Fetcher variants separate synchronous, asynchronous, stealthy (headless/anti-bot-aware), and dynamic fetching behaviors, indicating an architecture that isolates network/renderer strategies from parsing logic.
• The spider layer handles session management, concurrency, and proxy rotation, implying internal queuing, per-session state, and connector abstractions for proxies and credentialed sessions.
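
The adaptive-relocation idea can be sketched in miniature. This is a conceptual stand-in, not Scrapling's algorithm: try the remembered selector first, and if the markup changed, fall back to content fingerprints remembered from earlier runs. `page` here is a toy selector-to-texts mapping; a real parser would walk a DOM.

```python
def adaptive_select(page: dict, primary: str, fingerprints: list) -> list:
    """Return elements for `primary`, or relocate by content similarity
    when the selector no longer matches anything."""
    hits = page.get(primary)
    if hits:
        return hits
    # Selector broke after a site redesign: relocate via fingerprints.
    return [text for texts in page.values() for text in texts
            if any(fp in text for fp in fingerprints)]
```

A production implementation would also persist the newly discovered selector so the next run matches directly again.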

Use Cases:
• Reliable extraction from sites that change markup frequently.
• Crawls that must evade anti-bot checks like Cloudflare Turnstile in automated data collection contexts.
• Large-scale scraping workflows requiring pause/resume, streaming outputs, and proxy rotation.

Limitations / Notes:
• The source material documents capabilities and feature names but does not enumerate explicit constraints, supported environments, or performance benchmarks.
• No CVEs, IoCs, or attack chains are presented in the source.

References:
• Documentation sections referenced: selection methods, choosing a fetcher, CLI overview, MCP mode, migrating from BeautifulSoup.

🔹 tool #webscraping #python #crawler

🔗 Source: https://github.com/D4Vinci/Scrapling

@hasamba@infosec.exchange · Feb 24, 2026

----------------

🛠️ Tool
===================

Opening: ir-velociraptor is an orchestration-focused extension leveraging Velociraptor Query Language (VQL) to provide endpoint visibility, digital forensics, and incident response at scale. The project centralizes telemetry retrieval and live-response workflows while preserving evidentiary metadata for forensic integrity.

Key Features:
• Cross-endpoint querying: Ability to run VQL across large endpoint fleets to retrieve targeted telemetry quickly.
• Forensic artifacts: Built-in and user-definable artifacts for files, registry, memory snapshots, process telemetry, and network telemetry.
• Live response actions: Remote containment and remediation capabilities integrated with collection workflows.
• Evidence preservation: Automated capture of chain-of-custody metadata alongside collected artifacts for reproducible DFIR.
• Orchestration and export: Centralized orchestration that supports broad or targeted investigations and integration paths for SIEM and DFIR toolchains.

Technical Implementation:
• The tool uses VQL as the query and collection language, enabling expressive selectors for process, file, registry, memory, and network artifacts.
• Collection is optimized for low-latency and precise targeting to minimize noise and reduce data transfer overhead during investigations.
• Evidence handling attaches metadata describing provenance, collection timestamps, and custody to each artifact to support forensic workflows.
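
The evidence-handling step can be made concrete with a small wrapper that attaches provenance to each collected artifact. Field names are hypothetical illustrations of the chain-of-custody model, not ir-velociraptor's schema; the VQL string in the usage below is a standard Velociraptor process listing.

```python
import hashlib
from datetime import datetime, timezone

def wrap_artifact(host: str, query: str, data: bytes) -> dict:
    """Attach provenance metadata (source host, originating VQL,
    content hash, collection timestamp) to a collected artifact so the
    result is reproducible and custody is documented."""
    return {
        "host": host,
        "vql": query,
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "size": len(data),
    }
```

Example: `wrap_artifact("host-01", "SELECT Name, Pid FROM pslist()", raw_bytes)` yields a record a downstream SIEM or DFIR pipeline can verify against the artifact's hash.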

Use Cases:
• Multi-endpoint forensic investigations where rapid cross-host queries are required.
• IOC-driven threat hunting across large fleets.
• On-demand collection of endpoint artifacts for incident analysis and legal evidence.
• Live containment and remedial actions paired with evidence capture.
• Continuous monitoring pipelines that export telemetry to SIEM or DFIR pipelines.

Limitations:
• Effectiveness depends on the underlying Velociraptor agent deployment and telemetry coverage across endpoints.
• Scalability characteristics vary with environment scale and network constraints; orchestration design affects latency and throughput.
• Integration requires mapping collected artifact formats to downstream SIEM or DFIR ingestion schemas.

References:
• VQL – query language used for telemetry selection and collection
• chain-of-custody metadata model for preserved artifacts

🔹 tool #DFIR #Velociraptor #forensics #telemetry

🔗 Source: https://lobehub.com/skills/agentsecops-secopsagentkit-ir-velociraptor

@hasamba@infosec.exchange · Feb 24, 2026

----------------

🎯 AI
===================

Executive summary: A public post indicates that Boris Cherny, head of Claude Code, revealed his exact developer workflow for Claude Code. The original share is brief and framed as a high-value disclosure; however, the source text available here contains no step-by-step technical artifacts or IoCs.

Technical details (what is known):
• The only confirmed fact is that Boris Cherny published his workflow publicly and that the post was amplified by a repost.
• The source does not include concrete prompt text, code snippets, SDK references, API endpoints, or configuration artifacts.

Analysis:
• Public revelations of a lead developer's workflow can be useful to practitioners interested in prompt design, integration patterns, and developer ergonomics around LLM products. Without the original thread content, it is not possible to extract specific templates, chaining strategies, or tooling choices from the repost alone.
• Common elements that such workflows often expose include prompt structuring, system-message patterns, multi-step chains, and integration points with SDKs or orchestration layers. These are plausible areas of value but are not asserted facts about the referenced post.

Limitations:
• No verifiable technical artifacts were provided in the available text, so there are no IoCs, versions, or implementation details to report.
• Any deeper technical appraisal requires access to the original workflow content shared by Boris Cherny.

Implications for practitioners:
• The announcement signals community interest in operational workflows for Claude Code, but practitioners should review the primary source before adopting patterns.

🔹 Claude #Boris_Cherny #workflow #LLM #promptengineering

🔗 Source: https://x.com/Eljaboom/status/2025905043447459983

@hasamba@infosec.exchange · Feb 23, 2026

----------------

🛠️ Tool
===================

Opening: PentAGI is an autonomous penetration testing framework that combines LLM-driven agents with a curated suite of professional security tools. The project positions itself as a self-hosted, microservices-capable platform that orchestrates reconnaissance, exploitation, reporting and long-term memory for red-team workflows.

Key Features:
• Agent orchestration: Autonomous AI agents that plan and execute multi-step pentest tasks and delegate to specialized sub-agents.
• Toolchain integration: Built-in support for more than 20 standard pentesting utilities, including nmap, metasploit and sqlmap for scanning and exploitation workflows.
• Knowledge graph: Graphiti integration backed by Neo4j to map semantic relationships between assets, findings and techniques.
• Persistent vector storage: Use of PostgreSQL with pgvector extension for embedding-based memory and retrieval.
• Monitoring & reporting: Integration points for Grafana/Prometheus and automated generation of vulnerability reports with exploitation details.

Technical Implementation (conceptual):
• The architecture is microservices-oriented, with task-specific services for crawling, tool execution orchestration, memory management and API layers (REST and GraphQL).
• Sandbox isolation is enforced at container level via Docker so that tool execution occurs in separated runtime environments.
• LLM connectivity is abstracted to support multiple providers (OpenAI, Anthropic, Ollama, AWS Bedrock, Google AI), allowing the agent logic to leverage various models and endpoints for planning and natural-language reasoning.
• Knowledge persistence combines a graph database (Neo4j) for relationships and a vector database approach via pgvector for embedding similarity searches and long-term memory reuse.
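
The pgvector-style memory reuse can be sketched as embedding storage plus cosine-similarity retrieval. This toy in-process version stands in for the real Postgres/pgvector store; the embeddings in practice would come from an LLM provider.

```python
import math

class VectorMemory:
    """Minimal stand-in for an embedding store: add (vector, payload)
    pairs, retrieve the payload nearest to a query vector."""

    def __init__(self):
        self.items = []  # list of (embedding, payload)

    def add(self, vec, payload):
        self.items.append((vec, payload))

    def nearest(self, vec):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        return max(self.items, key=lambda item: cos(item[0], vec))[1]
```

This is how prior findings ("long-term memory") would surface during planning: embed the current task, retrieve the most similar stored result, and feed it back into the agent's context.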

Use Cases:
• Autonomous reconnaissance and attack surface enumeration for internal red teams.
• Reproducible test runs that store command outputs and reasoning for audit and reporting.
• Research and proof-of-concept development where multi-tool orchestration and LLM planning accelerate workflows.

Limitations and Considerations:
• Autonomous offensive operations raise ethical and legal constraints; operator oversight and rules-of-engagement remain necessary.
• False positives and hallucinated steps from LLM-driven planning can occur; results should be validated by human operators.
• Resource and operational costs scale with model usage and container orchestration; observability integrations (Grafana/Prometheus) are provided but operational tuning is required.

References: PentAGI lists integrations with nmap, metasploit, sqlmap, Neo4j, pgvector, and support for multiple LLM providers.

🔹 tool #AI #pentesting #Neo4j #pgvector

🔗 Source: https://github.com/vxcontrol/pentagi

@hasamba@infosec.exchange · Feb 21, 2026

----------------

🛠️ Tool
===================

Opening: This Claude Code skill automates CrowdStrike Fusion workflow authoring by translating plain‑language prompts into complete, validated YAML workflow definitions and importing them directly into a CID. The approach removes manual drag‑and‑drop configuration and replaces iterative canvas edits with programmatic generation and API validation.

Key Features:
• Action catalogue discovery: Queries the live action catalogue (reported as 5,000+ actions across 100+ vendors) to resolve correct action_id values and input schemas for the target infrastructure (for example, selecting Azure AD vs Okta actions).
• Pattern composition: Authors common workflow patterns including single actions, loops (array processing), conditional branching, and combined loop+routing patterns with proper variable handling.
• YAML authoring & CEL: Produces full YAML that includes triggers, data references, variables, and CEL expressions for inline logic and routing.
• Validation pipeline: Performs local structural validation followed by an API dry‑run against the CrowdStrike Fusion API; provides explicit error messages and iterative fixes until the workflow passes validation.
• Direct import: Imports the validated workflow into the target CID so it is ready to run without manual canvas wiring.

Technical implementation:
• The skill queries the Fusion action catalogue to map natural language intents to concrete action_id entries and input schemas. It then selects trigger types and composes control structures (loops, conditionals) using variables and data references expected by the Fusion engine.
• Validation is two‑stage: syntactic/structural checks locally against schema expectations, then a dry‑run call to the CrowdStrike API for semantics and integration checks. Errors surfaced from the API are used to drive automated correction cycles.

Use cases:
• Rapid containment during ransomware incidents (contain host, revoke sessions, kill processes).
• Scheduled hunts such as “logins by AD admins outside X.x.x.x/24”.
• Bulk actions: containing lists of devices, blocking IP lists, or sweeping endpoints for indicators.

Limitations & considerations:
• The skill depends on accurate action catalogue mappings; misaligned or deprecated actions in the tenant catalogue may require manual review.
• Complex organizational policies or custom actions not exposed in the catalogue may need human intervention.

🔹 crowdstrike #fusion #tool

🔗 Source: https://darkport.co.uk/blog/building-crowdstrike-workflows-with-claude-code-skills/


@hasamba@infosec.exchange · Feb 18, 2026

----------------

🛠️ Tool
===================

Executive summary: Matthew Berman reports having used and refined OpenClaw across 2.54 billion tokens of usage and now publishes a list of 21 practical daily use cases. The post highlights feature-level examples such as MD Files, a persistent memory system, and CRM integration as representative capabilities.

Tool purpose and capabilities:
OpenClaw is presented as a productivity-focused LLM agent exercised at large scale (2.54 billion tokens of usage). The author frames the result as a multi-use assistant that supports document-centric workflows (MD Files), a stateful memory subsystem (Memory System), and external system integrations (CRM). The claim of 21 distinct daily use cases suggests the tool is designed for repeated, task-oriented interactions rather than one-off queries.

Technical implementation (conceptual):
The reported token volume implies sustained, heavy real-world use rather than one-off experimentation. The listed features conceptually map to the following components:
• MD Files: markdown-aware document ingestion and retrieval, likely enabling context-rich prompts and structured content recall.
• Memory System: a persistent context store or vector-indexed memory allowing longer-term state across sessions.
• CRM integration: connectors or APIs to surface customer records and enrich responses with external data.
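As an illustration of what a persistent memory subsystem might look like in miniature, the sketch below stores notes and recalls them by bag-of-words cosine similarity — a stand-in for the embedding-based retrieval a real system would use; nothing here is taken from the OpenClaw codebase:

```python
import math
from collections import Counter

class MemoryStore:
    """Toy persistent-memory sketch: store notes, recall by cosine
    similarity over bag-of-words vectors (real systems use embeddings)."""

    def __init__(self):
        self.notes: list[tuple[str, Counter]] = []

    @staticmethod
    def _vec(text: str) -> Counter:
        return Counter(text.lower().split())

    def add(self, text: str) -> None:
        self.notes.append((text, self._vec(text)))

    def recall(self, query: str, k: int = 1) -> list[str]:
        q = self._vec(query)

        def sim(v: Counter) -> float:
            dot = sum(q[w] * v[w] for w in q)
            norm = (math.sqrt(sum(c * c for c in q.values()))
                    * math.sqrt(sum(c * c for c in v.values())))
            return dot / norm if norm else 0.0

        ranked = sorted(self.notes, key=lambda n: sim(n[1]), reverse=True)
        return [t for t, _ in ranked[:k]]
```

A vector-indexed store like this is what lets an assistant keep state across sessions: each turn is written in with `add`, and relevant prior context is pulled back with `recall` before prompting.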

Use cases and workflow fit:
The post lists 21 concrete daily uses; the examples suggest OpenClaw targets knowledge-work automation: note-taking and retrieval, multi-step agentic workflows, contact and CRM workflows, and personalized templates. The emphasis on daily usage implies that latency, reliable context recall, and consistent prompt behavior are priorities.

Limitations and open questions:
The public post provides high-level claims without technical artifacts: there are no published benchmarks, architecture diagrams, or reproducible configurations. Key unknowns include the underlying model (LLM family), the breakdown behind the 2.54B-token figure, the memory persistence model, and privacy/PII handling for CRM-linked workflows.

References and follow-up:
The source is a short-form announcement sharing the list of 21 use cases; deeper technical details and reproducible artifacts are not provided in the original post. #OpenClaw #tool #LLM #memory_system #MD_Files

🔗 Source: https://x.com/MatthewBerman/status/2023843493765157235

@hasamba@infosec.exchange · Feb 17, 2026

----------------

📚 Frameworks
===================

Executive summary: The OWASP Cheat Sheet Series is the official OWASP repository of concise, topic-focused application security guidance. The project aggregates actionable cheat sheets aimed at developers, reviewers, and integration teams, and includes documentation for contributors and content standards.

Technical details:
• The repository centralizes individual cheat sheets covering secure coding, authentication, session management, cryptography, input validation, and other application-security domains.
• Documentation files of note include CONTRIBUTING.md and GUIDELINE.md which define contribution workflow and the structure/quality expectations for new cheat sheets.
• The project provides an automated build process and a distributable offline archive (bundle.zip) for teams that want an offline copy of the site.
• Communication and community coordination occur via the OWASP Slack workspace and the #cheatsheets channel mentioned by the project.

Implementation and architecture (conceptual):
• Content is authored in Markdown as the canonical source format and rendered into a static site for web consumption. The repository maintains linting and terminology checks to preserve consistency across entries.
• The build pipeline includes markdown/terminology linters and a bundling step to produce an offline package intended for internal distribution or air-gapped environments.
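A terminology lint step of the kind described can be sketched in a few lines; the rules below are illustrative, not the project's actual lint configuration:

```python
import re

# Illustrative terminology rules in the spirit of the project's linting
# step (the real rule set lives in the repository's lint configuration).
TERMINOLOGY = {
    r"\bwhite[- ]?list\b": "allowlist",
    r"\bblack[- ]?list\b": "denylist",
}

def lint_markdown(text: str) -> list[str]:
    """Return one finding per line containing a flagged term."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), 1):
        for pattern, preferred in TERMINOLOGY.items():
            if re.search(pattern, line, re.IGNORECASE):
                findings.append(f"line {lineno}: use '{preferred}'")
    return findings
```

Running a check like this in CI keeps hundreds of independently authored cheat sheets consistent without manual review of every pull request.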

Use cases:
• Developers seeking compact, prescriptive guidance for specific secure-coding problems.
• Security reviewers and architects needing checklist-style references during code reviews and design reviews.
• Teams and educators requiring an offline, distributable set of best practices for training or policy alignment.

Limitations and considerations:
• The repository is community-maintained; coverage varies by topic and relies on volunteer contributions for updates and new content.
• The guidance is reference-oriented and not a replacement for in-depth standards or formal compliance controls; context-specific adaptation is required when applying guidance to complex systems.

References and governance:
• The project lists project leaders and core team members, and invites contributions via issue tracking and pull requests. The repository also documents linting rules and terminology standards to maintain consistency.

🔹 OWASP #cheatsheets #application_security #security_guidelines #bookmark

🔗 Source: https://github.com/OWASP/CheatSheetSeries/tree/master/cheatsheets

@hasamba@infosec.exchange · Feb 16, 2026

----------------

🛠️ Tool
===================

Opening — Purpose and scope
GroundUp Toolkit is an open-source automation framework aimed at venture capital teams. It centralizes dealflow and meeting-operations tasks via an OpenClaw-based WhatsApp gateway and an AI assistant, integrating with HubSpot, Google Workspace, Claude AI and other services.

Key Features
• Meeting automation: WhatsApp reminders with attendee context sourced from HubSpot, LinkedIn and Crunchbase.
• Meeting bot: automatic join of Google Meet sessions, recording and extraction of action items using Claude AI for summarization.
• Deal automation: monitoring of inbound Gmail to auto-create HubSpot companies and deals.
• Deck analysis: structured extraction from pitch decks stored in DocSend, Google Drive and Dropbox.
• Operational tooling: health checks, WhatsApp watchdogs, and a Shabbat-aware scheduler to control timing for automations.

Technical implementation and architecture
• The gateway layer is OpenClaw which mediates WhatsApp team chat and routes messages to internal skills and scripts.
• Core integrations rely on HubSpot APIs (via a Maton gateway in the original stack), Google Workspace operations (calendar, Gmail, Docs) and Claude AI for NLP-based extraction and summarization.
• Auxiliary services include Twilio for phone alerts and Brave Search for external research inputs; deck parsing operates against common storage backends (DocSend/Drive/Dropbox).
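Conceptually, the gateway layer pairs incoming-message patterns with skill handlers. The sketch below is hypothetical — the patterns and handler names are invented, and the real OpenClaw routing is more involved:

```python
import re
from typing import Callable

class SkillRouter:
    """Minimal pattern-to-handler dispatch for incoming chat messages."""

    def __init__(self):
        self._routes: list[tuple[re.Pattern, Callable[[str], str]]] = []

    def register(self, pattern: str, handler: Callable[[str], str]) -> None:
        self._routes.append((re.compile(pattern, re.IGNORECASE), handler))

    def dispatch(self, message: str) -> str:
        # First matching pattern wins; unmatched messages fall through.
        for pattern, handler in self._routes:
            if pattern.search(message):
                return handler(message)
        return "no matching skill"

router = SkillRouter()
router.register(r"\bremind\b", lambda m: "scheduled reminder")
router.register(r"\bdeal\b", lambda m: "created HubSpot deal")
```

In a deployment, each handler would call out to the relevant integration (HubSpot API, calendar, Claude summarization) rather than return a string.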

Use cases
• Streamlining pre-meeting context delivery and automated follow-ups for VC partners.
• Reducing manual CRM updates by converting meeting notes and WhatsApp discussions into HubSpot records.
• Maintaining a watchlist with monthly research digests and action tagging (keep/pass/note).

Limitations and considerations
• The toolkit depends on hosted third-party services (OpenClaw, Claude/Anthropic, HubSpot, Twilio) that require accounts and API access.
• Operational stability requires gateway uptime and a monitoring layer; the repo includes watchdog scripts but external reliability of WhatsApp sessions can be a constraint.
• Some features (Google Workspace operations, OAuth flows) imply credential management and proper permissions, which influence deployment and access models.

References & tags
OpenClaw, Claude AI, HubSpot, Google Workspace, Twilio, DocSend

🔹 tool #openclaw #whatsapp #claude_ai #hubspot

🔗 Source: https://github.com/navotvolkgroundup/groundup-toolkit

@hasamba@infosec.exchange · Feb 16, 2026

----------------

🛠️ Tool
===================

Opening: Antigravity Awesome Skills (Release 5.4.0) is a large-scale GitHub repository that aggregates 857+ agentic "skills"—small markdown files that encode task-specific instructions and workflows intended for multiple AI coding assistants. Supported agents listed include Claude Code, Gemini CLI, Codex CLI, Antigravity IDE, GitHub Copilot, Cursor, OpenCode, and AdaL CLI.

Key Features:
• Cross-agent compatibility: skills are authored to be usable across diverse assistant runtimes and IDE/CLI integrations.
• Curated bundles: starter packs and role-focused bundles (referenced as docs/BUNDLES.md) group relevant skills for specific developer personas.
• Workflow-oriented: packaged workflows aim to let an AI assistant operate as a full-stack digital agency, covering tasks from code generation through deployment-oriented flows.

Technical Implementation (conceptual):
• Skills are stored as markdown artifacts that include invocation patterns, input/output expectations, and role-play prompts to guide agent behavior.
• Integration points reference official provider capabilities (Anthropic, OpenAI, Google, Microsoft) so that skills can map to provider-specific APIs and CLIs when invoked by an agent front-end.
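Skill files of this kind typically pair a small metadata header with markdown instructions; the exact layout varies by repository. A parser for a hypothetical frontmatter-style skill file might look like:

```python
def parse_skill(text: str) -> dict:
    """Split a skill file into a metadata dict and a markdown body.
    Assumes a hypothetical frontmatter block delimited by `---` lines."""
    lines = text.splitlines()
    meta, body_start = {}, 0
    if lines and lines[0].strip() == "---":
        for i, line in enumerate(lines[1:], 1):
            if line.strip() == "---":
                body_start = i + 1
                break
            if ":" in line:
                key, _, value = line.partition(":")
                meta[key.strip()] = value.strip()
    return {"meta": meta, "body": "\n".join(lines[body_start:]).strip()}
```

A consuming agent front-end would use the metadata to decide when a skill applies and feed the body into the model as instructions.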

Use Cases:
• Standardizing coding assistant responses across teams by distributing a shared skillset.
• Rapid composition of multi-step developer tasks (scaffolding, refactors, test generation, CI/CD conceptual flows) using prebuilt workflows.
• Onboarding AI assistants to organization-specific protocols or syntax via reusable skill files.

Limitations:
• Repository is a collection of declarative skill files rather than a single runnable binary; actual behavior depends on the consuming agent and its integration.
• Runtime compatibility and feature parity depend on upstream agent capabilities and provider APIs; not every skill will map identically across all assistants.
• Versioning and maintenance of 857+ items require governance to avoid drift between skill intent and agent semantics.

References:
• Noted components: V5.4.0, docs/BUNDLES.md, and explicit support for Claude Code, Gemini CLI, Codex CLI, Copilot, Cursor, OpenCode, and AdaL.

🔹 tool #agentic_skills #antigravity #claude_code #gemini_cli

🔗 Source: https://github.com/sickn33/antigravity-awesome-skills

@hasamba@infosec.exchange · Feb 15, 2026

----------------

🛠️ Tool
===================

Opening:
Logan is an unofficial, community-maintained self-hosted WhatsApp bot designed for group intelligence and moderation. The project combines a WhatsApp client library with LLM-backed processing to provide mention-driven responses, automated summaries, voice transcription and schedule-aware group controls.

Key Features:
• Message logging and summarization: automatic daily recaps per group and optional master channel aggregation.
• Context-aware responses: uses recent conversation context (configured as 15 group messages and 5 user messages) to inform replies triggered by @mentions and keywords.
• Voice handling: transcription of voice notes and generation of voice summaries via TTS.
• Moderation tools: AI-assisted spam detection and removal, plus Shabbat-aware locking/unlocking of groups.
• Rate limiting and triggers: enforces 3 responses per user per minute and recognizes triggers such as @-mentions, "logan", and "לוגן".

Technical Implementation:
• Core runtime in Node.js 18+ with TypeScript 5.3+.
• WhatsApp connectivity via the Baileys library for session handling and events.
• LLM backend integration through the Groq API, leveraging LLaMA 3.3 70B class models claimed to offer GPT-4 class performance for prompt handling and summarization.
• Storage and privacy model rely on Supabase for message archival and state management.

Use Cases:
• Community groups requiring automated moderation and concise daily digests.
• Teams that need voice-to-text conversion and on-demand AI summaries inside WhatsApp.
• Hebrew-speaking communities (built with Hebrew support) that want localized personality and scheduling (Shabbat awareness).

Limitations and Considerations:
• Unofficial implementation not affiliated with WhatsApp/Meta; dependent on third-party libraries and APIs.
• LLM backend usage may incur compute or API costs depending on provider quotas and model selection.
• Privacy claims depend on deployment choices; Supabase account and self-hosting decisions determine data residency.

References:
Key technical stack includes Baileys, Node.js, TypeScript, Groq API, LLaMA 3.3 70B, and Supabase.

🔹 tool #WhatsApp #AI #Supabase #LLaMA

🔗 Source: https://github.com/hoodini/whatsapp-public-logan

@hasamba@infosec.exchange · Feb 15, 2026

----------------

📚 Frameworks
===================

Executive summary: CERT-EU published its Cyber Threat Intelligence Framework on 13-02-2026 to standardise how malicious cyber activity is classified, assessed and prioritised for Union entities. The framework is designed to support consistent reporting, alerting and situational awareness across strategic and technical dimensions.

Technical details:
• The framework defines the concept of a Malicious Activity of Interest (MAI) to include confirmed compromises, suspicious attempts, adversarial resource development and reconnaissance.
• An explicit ecosystem model narrows monitoring scope to components that affect Union entities: countries of operation, sectors of activity, geopolitical events, partners, providers, and systems and software.
• Content sections enumerated in the document include: threat and counter-threat categories, threat domains, threat levels, threat actor levels, tactics/techniques/procedures (TTPs), sectors of interest, confidence/uncertainties, attribution and scoring mechanisms.
• The framework references alignment with recognised standards and communities: FIRST, NATO and common threat-intel industry practices.
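To make the taxonomy concrete, a structured MAI record might be modelled as below; the field names follow the published section headings, not any official CERT-EU schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class MaiKind(Enum):
    """The four MAI categories named in the framework."""
    CONFIRMED_COMPROMISE = "confirmed compromise"
    SUSPICIOUS_ATTEMPT = "suspicious attempt"
    RESOURCE_DEVELOPMENT = "adversarial resource development"
    RECONNAISSANCE = "reconnaissance"

@dataclass
class MaiRecord:
    """Illustrative structured record for one Malicious Activity of Interest."""
    kind: MaiKind
    threat_domain: str
    threat_level: int                                # framework-defined ordinal scale
    confidence: str                                  # e.g. "low" / "medium" / "high"
    ttps: list[str] = field(default_factory=list)    # MITRE-style technique IDs
    sectors: list[str] = field(default_factory=list) # sectors of interest
```

Structuring observations this way is the "data-centric translation" the framework aims at: records with shared fields can be filtered, scored and exchanged between entities without re-interpretation.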

Operational mechanics:
• The framework supports CERT-EU's Full-Spectrum Adversary Approach (a threat-informed defence model) by converting observations into structured data for faster reaction and clearer communication.
• Scoring mechanisms cover adversary and mitigation assessment; the document positions these as enablers for prioritisation by primary operational contacts (POCs) and local cybersecurity officers (LCOs).

Implications for intelligence workflows:
• The structured taxonomy (MAI, ecosystem components, threat domains and levels) provides a shared reference model intended to reduce ambiguity between strategic and tactical reporting.
• Emphasis on data-centric translation of observations aims to improve interoperability and situational coherence across Union entities and partners.

Limitations and scope notes:
• The framework explicitly bounds monitoring to an ecosystem relevant to Union entities to avoid exhaustive global coverage.
• The document notes it may evolve with regulatory changes and stakeholder feedback; specific detection signatures or IoCs are not part of the released framework text.

References:
• CERT-EU Cyber Threat Intelligence Framework (release sections include: introduction; MAIs; ecosystem; TTPs; scoring; confidence; attribution)

🔹 CERT_EU #threat_intel #MAI #Full_Spectrum_Adversary #TTPs

🔗 Source: https://www.cert.europa.eu/publications/threat-intelligence/cyber-threat-intelligence-framework/

View on infosec.exchange
0
0
0
0
hasamba
hasamba
@hasamba@infosec.exchange

https:// linktr.ee/yanivr

infosec.exchange
hasamba
hasamba
@hasamba@infosec.exchange

https:// linktr.ee/yanivr

infosec.exchange
@hasamba@infosec.exchange · Jan 13, 2026
----------------

🎯 AI
===================

Executive summary: The article documents "AI tool poisoning," an attack in which attackers publish seemingly benign tools whose descriptions or metadata contain hidden instructions. When AI agents ingest those descriptions via Model Context Protocol (MCP) or similar interfaces, the hidden instructions can alter the agent's reasoning and parameter construction, causing sensitive data exposures without changes to tool code.

Technical details:
• Example artifact: a published tool called add_numbers whose description superficially states "Adds two integers and returns the result," but whose metadata contains an instruction to read ~/.ssh/id_rsa and pass its contents as the sidenote parameter.
• Threat mechanism: the agent parses the description during planning; the reasoning layer treats the buried instruction as legitimate guidance and constructs a call that sources local secrets into tool parameters.
• Scope: this is a context/metadata manipulation vector rather than code injection; the attacker leverages how agents interpret human-readable tool descriptions.

Analysis:
• Impact arises from conflating tool interface documentation with operational instructions inside the agent's planning phase. The attacker can compel the agent to access local files, secrets, or other sensitive context values and include them in tool calls, enabling exfiltration without exploiting the tool binary.
• This bypasses protections focused solely on tool code integrity because the malicious element is in descriptive metadata consumed by the agent.

Detection considerations:
• Monitor tool registry metadata for anomalous or imperative phrasing that references local paths, secret identifiers, or data access directives.
• Instrument agent reasoning logs to flag parameter sources that originate from sensitive file paths or environment values.
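The first detection idea — flagging registry metadata that references secret paths or issues imperative data-access directives — can be sketched as a simple pattern scan; the patterns are illustrative, not a production rule set:

```python
import re

# Heuristic indicators of poisoned tool descriptions: references to local
# secret paths, or imperative verbs tied to files/secrets. Illustrative only.
SUSPICIOUS = [
    re.compile(r"~/\.ssh|id_rsa|\.aws/credentials|/etc/passwd"),
    re.compile(r"\b(read|include|send|pass)\b.{0,40}\b(file|contents|secret|token|key)\b",
               re.IGNORECASE),
]

def scan_tool_description(description: str) -> list[str]:
    """Return the patterns a tool description matches (empty means clean)."""
    return [p.pattern for p in SUSPICIOUS if p.search(description)]
```

A scan like this belongs at registry ingestion time, before a description ever reaches an agent's planning context; reasoning-log instrumentation then acts as the second, runtime layer.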
Mitigation concepts:
• Treat tool descriptions and metadata as untrusted input: validate and sanitize natural-language instructions in metadata before inclusion in agent planning.
• Enforce principle of least privilege around what context the agent may access and which local values can be used to populate tool parameters.

Limitations:
• The article focuses on the conceptual attack and illustrative example; it does not provide exhaustive IoCs or a catalog of affected agent implementations.

🔹 AI #MCP #tool_poisoning #prompt_injection #metadata_manipulation

🔗 Source: https://www.crowdstrike.com/en-us/blog/ai-tool-poisoning/