Team Members: Jeremy Graves, Justin Johnson, and Quintin Donnelly
PROBLEM TITLE: Associating IP Address with Malicious Activity
A critical step in any analysis of malicious network activity is researching the source and destination addresses. Basic information like WHOIS and registry data is easy to retrieve programmatically, but large quantities of rich data exist in less structured sources like forums, reports, and published blacklists and whitelists. The ability to rapidly gather this information and present it in a single report would effectively remove a lengthy step from analysts’ process.
Analysts cannot research an IP address or domain, synthesize its association with malicious activity, and report on it in a timely manner.
Must be able to function through a proxy or run as a web service
Develop a programmatic technique to research an IP address or domain for association with malicious activity, and present a summarized report. The type of information requested ranges from attributes like registry details to mentions of the IP or domain in security forums, incident reports, and blacklists.
National Security Agency (NSA), Jill (email@example.com)
Given some data (e.g., IP address or URL) we can scan selected websites for a correlation to malicious activity or misuse and provide a summarized report of what is found.
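The hypothesis above can be sketched as a small aggregator that runs a target through several research sources and collects the results into one summarized report. This is a minimal illustration, not our implementation: the source functions below are hypothetical stand-ins for real lookups (WHOIS, blacklist checks, forum scrapes) and return canned data so the sketch runs offline.

```python
def whois_lookup(target):
    # Placeholder: a real implementation would query a WHOIS service.
    return {"registrar": "Example Registrar", "created": "2015-01-01"}

def blacklist_check(target):
    # Placeholder: a real implementation would query published blacklists.
    return {"listed": False, "sources_checked": 3}

def build_report(target, sources):
    """Run each research source against the target and collect the
    results into a single summary dictionary."""
    report = {"target": target, "findings": {}}
    for name, fn in sources.items():
        try:
            report["findings"][name] = fn(target)
        except Exception as exc:
            # One failed source should not sink the whole report.
            report["findings"][name] = {"error": str(exc)}
    return report

sources = {"whois": whois_lookup, "blacklist": blacklist_check}
print(build_report("198.51.100.7", sources))
```

Keeping each source behind a plain function would let new sites be added to the report without touching the aggregation logic, which matters given how varied the candidate sources (forums, reports, lists) are.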
Where are we?
Making contacts and lining up interviews while researching technologies and sites that could be useful in addressing this problem. Found a security-related open source project that might be of some use. OWASP has chapters and members in multiple states throughout the U.S. Sent emails to C-level contacts in multiple locations. Have received contact and support from our problem sponsor, who is assisting in lining up candidates for interviews.
June 12, 2017 – Bob Hopkins, Chief of Police at The University of Southern Mississippi.
Gave a brief overview of the problem and discussed what we might provide as a solution. Asked Mr. Hopkins whether what we will be developing would be useful to his office or its tasks. The response we received was that the majority of what they deal with is criminal in nature; IP and URL data and correlations wouldn’t be that useful to their office.
However, if the same application could search other or similar sites for details regarding an individual’s name, phone number, address, or some other related detail, and quickly provide a summarized discovery, that information might speed up their investigations into a crime.
In the interview we also discovered that some thought needs to be given to where and how data is acquired. In their case, acquiring data from the wrong location might let a criminal get out of jail free, so to speak.
June 13, 2017 – Bob Wilson, Technology Security Officer at The University of Southern Mississippi.
Discussed threat intelligence and the problem given to our H4D group. Bob stated that researching IP addresses and URLs wasn’t a high priority for him, and that identifying the maliciousness of addresses and URLs was tough.
For him, having an application or device that monitored the network in real time for threats and provided threat intelligence was more important. The ability to inventory the nodes on the campus network would also be useful. Another big concern was security awareness and how to address it with so many users.
June 14, 2017 – Neal, Researcher within DoD, performs passive data collection and research on hacking tools.
The need is for a tool that, given an IP address, subnet, or URL, performs research and returns a report. The research should look at whether the address is a client, server, or both; its web presence and history; SSL certificate validity and information; and who owns it. This interviewee said a maliciousness score for the address or URL would be nice if the formula used to compute the score was provided. The pain they are suffering is that they’re “drowning in data,” and the tool has the potential to alleviate some of that pain.
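One of the checks Neal described, SSL certificate validity, can be illustrated with Python's standard `ssl` module. The helper and sample certificate below are assumptions for illustration: the dict mimics the shape returned by `ssl.SSLSocket.getpeercert()`, so the sketch runs without a network connection.

```python
import ssl
import time

def cert_is_current(cert, now=None):
    """Return True if the certificate dict (shaped like the result of
    ssl.SSLSocket.getpeercert()) is inside its validity window."""
    now = time.time() if now is None else now
    not_before = ssl.cert_time_to_seconds(cert["notBefore"])
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    return not_before <= now <= not_after

# Hypothetical certificate metadata for illustration only.
sample_cert = {
    "subject": ((("commonName", "example.com"),),),
    "notBefore": "Jan  1 00:00:00 2017 GMT",
    "notAfter": "Jan  1 00:00:00 2020 GMT",
}

# Evaluate as of mid-2017, when these interviews took place.
mid_2017 = ssl.cert_time_to_seconds("Jun 15 00:00:00 2017 GMT")
print(cert_is_current(sample_cert, now=mid_2017))  # True
```

A real passive workflow would obtain the certificate from a logged TLS handshake or a certificate-transparency dataset rather than by connecting to the host, in keeping with the no-active-scanning concern raised in the next interview.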
June 14, 2017 – Andy, Analyst or Data Scientist, primarily looks for suspicious traffic.
Identified that our application needs to use popular security forums in its research. He wouldn’t use active scanning in the application, so as not to alert the potential offender(s). Felt that if the application was done right and received well, it could save up to a few hours per IP address or URL by removing some of the manual research required across open source and public domain material.
Things we’ve learned so far.
- Analysts are using manual means to research suspicious traffic.
- 5-10% of their time throughout the year could be re-purposed by automating the research process.
- Mean analyst pay is approximately $44/hour, and there are 82,000 analysts per the Department of Labor.
- Using a conservative estimate based on Department of Labor data, we figure over $300,000,000 per year is being spent on IP research.
- There are static-in-time big data sets available in repositories such as Amazon Web Services that might be usable in speeding up the automated research process.
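The cost figure above can be checked with back-of-the-envelope arithmetic from the numbers quoted in this list. The 2,080-hour work year (40 hours x 52 weeks) and the choice of the 5% low end of the 5-10% range are our assumptions:

```python
# Back-of-the-envelope check of the annual IP-research cost estimate.
analysts = 82_000         # analysts, per Department of Labor data
hourly_rate = 44          # mean analyst pay, $/hour
hours_per_year = 2_080    # assumed full-time work year (40 h x 52 wk)
research_share = 0.05     # low end of the 5-10% of time estimate

annual_research_cost = analysts * hourly_rate * hours_per_year * research_share
print(f"${annual_research_cost:,.0f}")  # prints "$375,232,000"
```

Even at the conservative 5% figure the result lands well above $300,000,000, supporting the estimate stated above.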