Inspired by research such as “Good Bot, Bad Bot: Characterising Automated Browsing Activity”, this platform introduces a honeypot-based, event-driven architecture that captures live malicious traffic and employs a cognitive engine to classify intent with high fidelity. This paper details the system’s components, focusing on the stub detectors that form the foundation of its threat intelligence capabilities.

The proliferation of automated web threats, including bots, scanners, and exploit-seeking crawlers, poses significant challenges to cybersecurity. These actors often employ evasion techniques such as identity rotation and rapid reconnaissance, rendering traditional signature-based defences inadequate.

The architecture of the platform, as shown in Figure 1, is centred around a Kafka cluster that serves as the data backbone. The honeypot website generates raw logs (HTTP/HTTPS), which are ingested into a “raw logs” topic within the Kafka cluster. The cognitive engine consumes these logs, analyses them for malicious patterns, and publishes the enriched records to an “analysed” topic. This enriched data is then indexed by an Elasticsearch cluster for visualisation and querying, with the final threat classifications stored in a PostgreSQL database for actionable intelligence.
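As a minimal sketch of this consume–analyse–produce loop, the snippet below uses the kafka-python client; the broker address, the topic names raw-logs and analysed, the JSON event schema, and the toy user-agent heuristic standing in for the cognitive engine and its stub detectors are all illustrative assumptions, not details confirmed by the platform.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Assumed broker address; Kafka topic names cannot contain spaces,
# so the “raw logs” topic is rendered here as "raw-logs".
BROKER = "localhost:9092"

consumer = KafkaConsumer(
    "raw-logs",
    bootstrap_servers=BROKER,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def classify(event: dict) -> dict:
    """Hypothetical stub detector: flag requests whose User-Agent
    matches a known scanner signature."""
    ua = event.get("user_agent", "").lower()
    is_scanner = any(sig in ua for sig in ("sqlmap", "nikto", "masscan"))
    return {**event, "classification": "scanner" if is_scanner else "unclassified"}

# Consume raw honeypot logs, enrich each event, and republish the result
# for downstream indexing (Elasticsearch) and storage (PostgreSQL).
for record in consumer:
    producer.send("analysed", classify(record.value))
```

In the full platform, classify would dispatch to the cognitive engine’s detector ensemble rather than this single heuristic, but the surrounding consume–enrich–produce structure is the same.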