By Samira Siddiqui

What if an AI could do a hacker's job faster, cheaper, and sometimes better than the people who are paid six-figure salaries to do it? That's exactly what happened at Stanford University, where an AI agent quietly scanned thousands of computers and walked away having beaten most professional human hackers at their own game.
In a controlled security test, Stanford researchers let an AI agent loose on their computer networks. The result: the machine found security holes that experienced humans had missed, and did it for the price of a takeaway meal.
AI beats human hackers

The image was created using AI
The AI agent, called ARTEMIS, was tested against ten professional penetration testers, cybersecurity specialists whose job is to legally 'hack' systems and find weaknesses before criminals do.
ARTEMIS scanned around 8,000 devices, including servers, computers, and smart systems, across Stanford's public and private computer science networks. It ran for 16 hours, but the researchers primarily compared its performance over the first 10 hours, matching the time allotted to the human testers.
In that window, ARTEMIS uncovered nine real security flaws with 82% accuracy, beating nine out of the ten professional hackers. According to the researchers, its performance was on par with the best human participants.
Unlike older AI tools that lose focus during long tasks, ARTEMIS was built to work independently for hours, scanning, testing, and analysing systems without human help.
A huge gap in cost

One of the most striking findings wasn't just about skill, it was about money. Running ARTEMIS costs about $18 (around Rs 1,630) per hour. Even a more powerful version costs $59 (around Rs 5,300) per hour.
By comparison, a professional penetration tester in the US earns roughly $125,000 a year, which works out to about $60 an hour over a typical 2,000-hour working year, before benefits and overheads.
One of the researchers said:
This kind of capability has the potential to dramatically lower the cost of cybersecurity auditing.

The idea isn't to replace humans entirely, but to let AI handle the repetitive, time-heavy work that often slows security teams down.
How ARTEMIS found what humans missed

ARTEMIS had a clever advantage. Whenever it noticed something unusual during a scan, it immediately launched smaller 'sub-agents' to investigate the issue in parallel. This meant it could examine multiple suspicious areas at the same time, something human testers simply can't do.
In one example, ARTEMIS spotted a weakness on an outdated server that the human hackers had skipped because their web browsers couldn't load it. The AI sidestepped the problem by accessing the system through a command-line interface instead.
That ability to adapt gave ARTEMIS an edge in technical, text-based environments.
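To make the parallel 'sub-agent' idea easier to picture, here is a minimal, hypothetical Python sketch. It is not ARTEMIS's actual code (which the article does not show); the host names and helper functions are invented for illustration. The pattern is simply: scan many machines concurrently, and hand each unusual finding to its own worker the moment it appears.

```python
# Hypothetical illustration of the parallel "sub-agent" pattern described above.
# Nothing here comes from the ARTEMIS codebase; hosts and checks are stand-ins.
from concurrent.futures import ThreadPoolExecutor, as_completed

def probe_host(host: str) -> dict:
    # Placeholder scan: in this sketch, anything under ".legacy.example.edu"
    # stands in for an outdated server worth a closer look.
    return {"host": host, "suspicious": host.endswith(".legacy.example.edu")}

def investigate_finding(finding: dict) -> str:
    # A "sub-agent" would dig deeper here, for example falling back to a
    # command-line interface when a browser can't load the target.
    return f"{finding['host']}: flagged for CLI follow-up"

def run_scan(hosts: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        # Scan every host concurrently...
        probes = [pool.submit(probe_host, h) for h in hosts]
        # ...and fan out a sub-agent for each unusual finding as it arrives,
        # so several suspicious machines are examined at the same time.
        sub_agents = [
            pool.submit(investigate_finding, fut.result())
            for fut in as_completed(probes)
            if fut.result()["suspicious"]
        ]
        return [fut.result() for fut in as_completed(sub_agents)]

if __name__ == "__main__":
    for report in run_scan(["web01.example.edu", "db02.legacy.example.edu"]):
        print(report)
```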
Where the AI struggled
Despite the impressive results, ARTEMIS isn't flawless. The AI struggled with systems that relied heavily on graphical interfaces. In one case, it missed a serious vulnerability because it couldn't perform simple actions like clicking buttons or navigating visual menus.
It also raised a few false alarms, mistaking harmless activity for potential attacks.
The researchers noted that ARTEMIS works best in "code-like" environments, where information is presented as text, logs, or commands, not visual dashboards.
Why this matters for the future of cybersecurity

The study lands at a time when AI is already reshaping cybercrime. Criminal groups are using AI tools to automate attacks, write convincing phishing messages, and even create fake identities.
Recent reports have linked AI tools to North Korean hackers creating fake military IDs, and to operatives using AI to apply for remote jobs at major companies to gain internal access. Other threat groups have reportedly used AI to plan attacks on telecom and government systems.
Soon, the world's best human hackers may not just be fighting cybercriminals. They may also be competing with the machines they helped create.
