
1 Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
Tony Cebzanov Good morning, my name is Tony Cebzanov. I am a software engineer with the CERT Security Automation directorate at the CMU Software Engineering Institute, and today I’m going to present some details about Mothra, which is a project we’ve been working on to develop a modern, scalable, and easily extensible architecture for network analysis.

2 Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
Copyright 2016 Carnegie Mellon University

This material is based upon work funded and supported by the Department of Defense under Contract No. FA C-0003 with Carnegie Mellon University for the operation of the Software Engineering Institute, a federally funded research and development center. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Department of Defense.

NO WARRANTY. THIS CARNEGIE MELLON UNIVERSITY AND SOFTWARE ENGINEERING INSTITUTE MATERIAL IS FURNISHED ON AN “AS-IS” BASIS. CARNEGIE MELLON UNIVERSITY MAKES NO WARRANTIES OF ANY KIND, EITHER EXPRESSED OR IMPLIED, AS TO ANY MATTER INCLUDING, BUT NOT LIMITED TO, WARRANTY OF FITNESS FOR PURPOSE OR MERCHANTABILITY, EXCLUSIVITY, OR RESULTS OBTAINED FROM USE OF THE MATERIAL. CARNEGIE MELLON UNIVERSITY DOES NOT MAKE ANY WARRANTY OF ANY KIND WITH RESPECT TO FREEDOM FROM PATENT, TRADEMARK, OR COPYRIGHT INFRINGEMENT.

[Distribution Statement A] This material has been approved for public release and unlimited distribution. Please see Copyright notice for non-US Government use and distribution. This material may be reproduced in its entirety, without modification, and freely distributed in written or electronic form without requesting formal permission. Permission is required for any other use. Requests for permission should be directed to the Software Engineering Institute.

Carnegie Mellon®, CERT®, CERT Coordination Center® and FloCon® are registered marks of Carnegie Mellon University.

3 Agenda Introduction Architecture and Design Demonstration Future Work
I’ll start by giving a brief overview of what our goals were in developing Mothra. Next, I’ll delve into some of the architectural details of the platform, explaining some of the design decisions we made, and why we made them. Then I’ll provide a brief tour of the platform’s features. Finally, I’ll close by summarizing some of the future work we’re planning as part of this project.

4 Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
Introduction

5 In the beginning... there was Netflow
Introduction
In the beginning... there was Netflow
Netflow was designed to retain the most important attributes of network conversations between TCP/IP endpoints on large networks without having to collect, store, and analyze all of the network's packet-level data
For many years, this has been the most effective way to perform security analysis on large networks
Over time, demand has increased for a platform that can support analytical workflows that make use of attributes beyond the transport layer
Modern flow collectors can export payload attributes at wire speed
The challenge is scalable storage and analysis
The current generation of distributed data processing platforms provides tools to address this challenge

The first thing I’d like to do is contextualize what we’re doing with Mothra by talking about how Netflow analysis has evolved over time. I want to stress that when I say “flow” or “Netflow” in this talk, I’m not talking about any specific format or platform, but speaking more generally about any format or platform that enables scalable analysis of network traffic by summarizing and aggregating packet-level data into higher-order records.

With that said, the canonical form of Netflow that we tend to think of at CERT is a fixed-length record containing the “5-tuple” of source IP, source port, destination IP, destination port, and protocol, plus the start and end timestamps for the flow in question. These fields are among the most important in distinguishing one network conversation from another on a TCP/IP network. Historically, we’ve found that retaining only these key fields plus a handful of other fixed-length fields such as byte and packet counts and TCP flags has enabled scalable analysis of network traffic, even on the largest of networks. By summarizing entire network conversations with just a few dozen bytes each, network analysts have for many years had the ability to quickly scan over millions of records to find the traffic they’re interested in, even before the dawn of multi-core CPUs and machines with many gigabytes of RAM.

This model has worked well, but over time, analysts have asked for the ability to collect finer-grained details of network conversations, including some protocol fields that vary greatly in length and might not appear in every record. Collecting these fields is relatively easy, and many modern flow collectors are able to collect and export these attributes, but efficiently storing and analyzing that data can be challenging, even with the most powerful hardware available today. The good news is that newer platforms have emerged that provide a way forward for this level of analysis, and the Mothra project represents our vision of how to incorporate those platforms into an architecture that serves the network defense community.

6 Introduction Project Goals
The Mothra security analysis platform enables scalable analytical workflows that extend beyond the limitations of conventional flow records. With the Mothra project, we aim to:
facilitate bulk storage and analysis of cybersecurity data with high levels of flexibility, performance, and interoperability
reduce the engineering effort involved in developing, transitioning, and operationalizing new analytics
serve all major constituencies within the network security community, including data scientists, first-tier incident responders, system administrators, and hobbyists

The overarching goal of the Mothra project is to enable scalable network analysis workflows that extend beyond the limitations of conventional flow records. For this goal to be achievable, we need to build a platform that facilitates bulk storage and analysis of network telemetry data in a way that’s flexible, scalable, and easily integrated with other systems. We also need to make sure that new analytics are easy to develop, transition, and deploy. Finally, we need to ensure that we serve everyone with an interest in cybersecurity, whether they’re a seasoned data scientist, a first-tier responder, a sysadmin, or just someone who wants to defend their home or small office network and doesn’t have the resources to employ an army of analysts and incident responders.

7 SiLK: The Next Generation?
Introduction SiLK: The Next Generation?
Mothra is not the next version of SiLK
SiLK’s design philosophy was inspired by UNIX
Command-line tools that each focus on doing one thing well
Tools are composable into analytics via shell scripting
Fixed-length record formats for optimal performance
With larger, variable-length records, this design can’t scale
Solution? Throw more hardware at the problem (“big data”)
We view SiLK and Mothra as complementary projects that will be developed in parallel for the foreseeable future
SiLK still performs well for queries that don’t look beyond layer 4
Mothra enables more complex analyses at a scale beyond the capability of SiLK’s single-node architecture

Some of you might be thinking: well, CERT develops SiLK, and here’s this guy talking about a new Netflow analysis platform, so this must be the new version of SiLK. I want to emphasize that this is not the case. It’s not that there aren’t overlaps between the capabilities of the two platforms, but because Mothra represents a significant departure from the architecture and design of SiLK, it makes sense to treat it as a new project, albeit one that’s trying to solve some of the same problems.

As many of you are aware, SiLK employs a very UNIX-y design philosophy, with command-line tools that each try to do one thing well, operating on fixed-length binary records, all glued together by shell scripts. Unfortunately, this single-node design can be limiting, and once you start bringing in variable-length formats and large volumes of payload attributes, performance simply isn’t what people have come to expect from SiLK. So, we’re going to throw hardware at the problem. We’ve been evaluating various big data technologies in recent years, developing and testing some proof-of-concept implementations, but until recently, nothing we tried hit the sweet spot we were looking for in terms of scalability and ease of use. With this project, we feel like we’re getting close enough to start talking about it with the community to solicit feedback and share our progress so far, which is why I’m here today.

And let me assure those of you who love SiLK that we plan to continue to develop and maintain SiLK and Mothra independently. SiLK still scales very well for many workloads, but Mothra enables more complex analyses at a larger scale. So we view the two projects as complementary.

8 Architecture and Design
Mothra: A Large-Scale Data Processing Platform for Network Security Analysis Architecture and Design In this next set of slides, I’m going to go into a bit more detail about Mothra’s architecture and how it differs from that of SiLK.

9 Architecture and Design
YAF to SiLK Data Flow

This is what things look like at the network edge for someone who’s using YAF and SiLK to collect and analyze Netflow. In this example, YAF collects packet-level data and exports Netflow via the IPFIX protocol, which is a template-based IETF protocol specifically designed for transmitting Netflow data. Unlike some other Netflow formats, IPFIX records need not be fixed-length, and can contain a wide variety of application-level protocol data.

This is an example of what that data might look like if you look inside the IPFIX stream. In these slides, I’m only showing a small subset of the fields that YAF exports for demonstration purposes, so the specifics of what templates and fields are present in a real IPFIX stream will likely differ from what’s shown here. In this example, you can see three sample network conversations containing DNS, HTTPS, and SMTP information, along with the standard Netflow fields I talked about earlier, shown here with the blue background.

In a SiLK architecture, this IPFIX stream is then consumed by the rwflowpack tool, which packs the IPFIX information elements into flow records that are then stored in a SiLK repository. If we look at what this looks like on disk, we can see that SiLK has collected some information about all three flows, but the lower-level protocol fields containing information about DNS, HTTP, SSL, and SMTP have all been dropped. This is because SiLK has no mechanism for handling these fields, which means that if you wanted this level of detail, you’d have to go back to the original packet data, using SiLK key fields as an index. This is sub-optimal, so we’ve developed some tools to help improve handling of these application protocol fields.

To show how this works, let’s say you want to facilitate collection and analysis of DNS data. The first thing you need is a tool that can process the data coming from YAF and extract the information you care about. One such tool is super_mediator, which you can think of as a Swiss Army knife for IPFIX data. In this case, it’s used to consume IPFIX data from YAF and export a subset of the IPFIX data into CSV files containing the fields you want for each record, and then you simply load the data into a relational database.

Now you have a database containing the DNS information you want, but what if you then want to collect another data type, like SSL certificate information? Here is where the weaknesses of this approach begin to reveal themselves. super_mediator can easily be configured to write CSV files with other data types in them, including SSL fields, and populate another database table. Querying across these in the same database should be efficient, but when you want to correlate with SiLK flow records, you’re no longer getting the benefits of either the relational database or the SiLK tool suite. There are workarounds – you can populate a temporary database, or you can use SiLK bags and pmaps to load some of the fields in – but none of these workarounds is satisfactory.

10 Architecture and Design
YAF to Mothra Data Flow

With Mothra, things at the sensor edge are very similar. YAF collects data and exports IPFIX, with details about application-level protocols, but instead of processing that IPFIX with SiLK, we simply load the IPFIX as-is into a repository. The repository in this example is HDFS, which I’ll talk about later, but for now just think of it as a scalable, high-capacity network file system that holds the IPFIX data.

But how do we analyze that data? We spent a lot of time trying out various answers to this question, and so far, the most promising answer has been the Apache Spark platform. Spark is an open-source distributed data processing platform that has many compelling features, which I’ll go over in a minute. Mothra’s core libraries are built on top of Spark, and serve as a means of bringing Netflow data into the Spark ecosystem, at which point all further operations on the data are done using APIs provided by the Spark platform.

One of those APIs is the Dataset API, which allows for easy creation of table-like data structures that can be easily transformed and queried. I’ll show more detail on what this looks like later, but at a high level, if you want to build a DNS dataset, you can do so with a single line of code, and then write that dataset back to HDFS. The same process can be used to create a dataset for SSL information, and, importantly, all of these datasets live in the same repository as the original IPFIX data. They can be easily exported for ingest into other systems, or queried directly to build new derivative datasets on the fly, without having to touch anything at the sensor edge or any upstream configuration, which represents a significant reduction in the work required to deploy a new analytic workflow.
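As a rough sketch of what that single line might look like, consider the following. This assumes a live `spark` session as in a notebook or spark-shell; the "ipfix" data source name, HDFS paths, and field names are illustrative assumptions, not the documented Mothra API:

    // Minimal sketch: load raw IPFIX from HDFS, project the DNS fields,
    // and write the derived dataset back to the repository.
    // The "ipfix" data source name and the paths are hypothetical.
    val dns = spark.read.format("ipfix")
      .load("hdfs:///mothra/ipfix/*")
      .select("sourceIPv4Address", "destinationIPv4Address", "dnsQName")
    dns.write.parquet("hdfs:///mothra/datasets/dns")

The same pattern, with a different field selection, would produce the SSL dataset described above.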

11 Architecture and Design
SiLK vs. Mothra
Mothra departs from SiLK’s UNIX-like design in significant ways:
SiLK
Command-line tools, mostly written in C, with some Python
Analytics are written as UNIX shell scripts
Mothra
Built on Apache Spark, a cluster computing framework
Written primarily in Scala, which runs on the Java Virtual Machine
Language bindings for Java, Python, R, and SQL
Runs standalone or on an existing cluster platform (e.g. Hadoop)
Mothra’s core libraries are written in Scala and Java
Analytics can be written using any language Spark supports
A web notebook interface is provided for developing analytics

Now that we’ve looked at differences in how data flows through SiLK and Mothra, let’s compare some of the architectural and implementation details of the platforms themselves. For those who aren’t aware, SiLK is primarily written in C, with some Python API capability in some of the tools. By combining these tools into shell scripts, analysts can deploy new analytic workflows that provide functionality not available out of the box from any single SiLK tool. If a technique is found to be generally useful, it may be considered for inclusion in SiLK, either as a shared library plugin or as a dedicated command-line tool, but this generally requires a SiLK developer porting the technique from shell script to C.

The technology stack for Mothra is quite different. As I mentioned earlier, Mothra is built around the Apache Spark platform, which is written in Scala. Scala is a modern, statically typed, general-purpose programming language that compiles to Java bytecode and runs on the Java Virtual Machine. Importantly, one need not know any Scala to work with Spark, as it provides bindings and interfaces to other languages, including Java, Python, R, and SQL. Analysts can work using whichever of these languages they’re comfortable with. Spark can run standalone (meaning it manages its own compute resources) or on top of an existing cluster platform like Hadoop.

The Mothra core libraries are written in Scala and Java, but analyses can be developed in any language supported by Spark itself. We are not introducing our own domain-specific language with Mothra, choosing instead to utilize the capabilities already provided by Spark, and only writing our own extensions when necessary. Finally, instead of a SiLK-like command-line interface, Mothra provides a web notebook interface for interacting with the platform. I’ll show you what this looks like later, but we feel that it’s a vast improvement in terms of usability, especially for new users.

12 Platform Languages and Technologies
Architecture and Design Platform Languages and Technologies

Here’s a more visual comparison of the languages each platform uses and the interfaces each platform provides for developing new analytical capabilities. Both platforms sit on top of the base operating system, but that’s where the similarities end. SiLK libraries and tools are written in C and Python, with analytics written as Bash shell scripts. Mothra libraries are written in Scala and Java, with interfaces for developing analytics provided by Spark, which allows them to be written in any of Scala, Java, Python, R, or SQL, depending on the needs of the analyst.

Another thing I want to highlight in this slide is the set of components in each platform that are purpose-built for flow analysis. On the SiLK side, the red font color denotes components developed and maintained by the SiLK developers. As you can see, this is essentially the entire stack, save the C standard library and the OS itself. On the Mothra side, only the components with the blue font color are written by us, with everything else coming from development teams dedicated to improving those components for use across knowledge domains, not just for network analysis.

13 Architecture and Design
Why Spark?
Building Mothra on an established platform like Spark, with its active industry-sponsored open source development community, allows us to focus on components that deliver value to analysts. The Spark platform:
enables a degree of scalability not possible with SiLK
supports higher-level languages for faster development and transition of core functionality and analytics
provides consistent interfaces to a variety of data sources
includes libraries for graph analysis and machine learning
integrates well with other big data platforms and technologies

We feel that the emergence of the Spark platform represents a great opportunity to focus on the components that can help bring network data into an industry-standard ecosystem, rather than having to write our own data structures, machine learning algorithms, and language bindings any time someone wants something new from the platform. Conversely, the chances are much greater that someone has already developed something to solve a Netflow problem if the same platform is used by people solving problems in other domains.

14 Architecture and Design
Mothra Architecture

This diagram shows a view of the various components of the Mothra platform. We have input modules for both IPFIX and SiLK data, which we then bring into the Spark platform. At that point, users work with Spark APIs directly through the interface of their choice. At this time, the primary interface is the Jupyter notebook, but hardcore command-line junkies can use the spark-shell interpreter if they prefer a more SiLK-like experience. We’re also planning to integrate with operational analysis consoles, including Elasticsearch’s Kibana and Splunk.

The components in blue are components we develop at CERT – everything else is part of Spark or another third-party platform such as Apache Hadoop. Among other advantages mentioned on the previous slide, these platforms provide for scalable distributed processing and fault tolerance by replicating data to multiple nodes within the cluster, along with monitoring and configuration management of cluster resources.

15 SiLK vs. Mothra Scalability
Architecture and Design SiLK vs. Mothra Scalability Simply adopting a cluster computing platform doesn’t magically make every workflow scale well, but what we’ve found so far is that, while SiLK can outperform Mothra on some queries, Mothra tends to win as data volumes grow. When SiLK’s limited set of protocol fields is sufficient, and when queries are simply filtering and running a couple of tools to count or aggregate data, SiLK can beat Mothra. This is because there’s a fixed cost of starting up each Spark executor, and some variable costs involved in distributing work to each node in a cluster, writing data to disk, and transferring large volumes of data across the network. SiLK, of course, with its single-machine and mostly single-core design, doesn’t have any of these problems. However, once you want to analyze deeper protocol fields, and once you’re talking about complex queries with lots of data, these costs of distributing work in a Spark cluster are worth the benefit of spreading the work out to dozens or even hundreds of nodes. Spark enables entire classes of analysis that SiLK simply can’t do today, and speeds up many SiLK analyses that are possible but time-consuming.

16 Architecture and Design
User Interfaces
Mothra uses Jupyter Notebook as an exploratory analysis UI:
Rich web-based interface
Input cells for developing and executing code
Output cells display analysis results, including visualizations
Markdown for annotations with rich text
Simple sharing and publishing of analytics and results
Less daunting for novices to learn than the UNIX command line
For CLI fans, jupyter-console and spark-shell are available

As I briefly mentioned earlier, Mothra’s primary user interface is Jupyter Notebook, a rich, web-based interface that enables analysts to develop scripts interactively, seeing the results of each block of code in real time, with facilities for visualizing data, annotation, and easy sharing and publication of results. Our experience has shown that this interface is easier for novices to learn, and over time, even grizzled command-line vets begin to appreciate the power of notebook interfaces.

17 Jupyter Notebook This is what the Jupyter interface looks like.
Architecture and Design Jupyter Notebook

This is what the Jupyter interface looks like. Here, you can see the annotation cell at the top describing the code below it, which is shown in an input cell. Running the code in the input cell causes the output to appear below. These cells can be easily created and moved around the notebook, and an outlining feature allows related cells to be grouped together, expanded, and collapsed. This makes developing a new analytic more like writing a document, and publishing results as simple as sharing that document with others.

18 Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
Demonstration Now we’ll take a closer look at a sample session using the Mothra platform.

19 Field Specifier Syntax
Demonstration Field Specifier Syntax

IPFIX fields are specified using strings of the following format:

    [path][/][operator]name[:format][=alias]

where:
path (optional) is a path to the desired information element; paths are made up of template names, delimited by / characters; if path is empty, the field specifier will look for the given element in all top-level IPFIX records
operator (optional) is a character indicating how the information element should be treated; currently, the only operator is one indicating that the IE is a basicList field
name (required) is the name of the IPFIX information element
format (optional) is a string indicating how the field should be formatted in the data frame; current formats are:
str – format IE as a string
iso8601 – format IE as an ISO-8601 date string
base64 – format IE as a base64-encoded string
alias (optional) is a string to be used for the data frame column name instead of the IPFIX IE name; if alias is unspecified, the column name defaults to the IPFIX IE name

First, I need to tell you about field specifiers. Fields in SiLK have well-known names – sip for source IP, dport for destination port, stime for start time, etc. In IPFIX, things are a little different. Fields, which are called information elements in IPFIX parlance, do have well-known names, but because IPFIX is so flexible, the names tend to be longer and more descriptive. Also, because IPFIX is a template-based format and template fields can be nested arbitrarily, Mothra needs a syntax for specifying which fields you want to obtain data from in an IPFIX stream. That syntax is shown here, and should look familiar to anyone who has used XPath or CSS selectors.

Every field specifier in Mothra needs at least a name, which corresponds to the name of the information element you want to access. For many fields, that’s enough, but if you want to get that information element only from a particular IPFIX template, or give the information element an alias that can be used as a friendly name for the field, or format the field’s contents in a certain way, Mothra field specifiers enable this functionality.

20 Field Spec Examples flowStartMilliseconds
Demonstration Field Spec Examples

flowStartMilliseconds
the flow start time in milliseconds at the top level of any IPFIX record

flowStartMilliseconds:iso8601
same, but formatted as an ISO-8601 string

flow_total_ip6/flowStartMilliseconds:iso8601
same, but only if the top-level record matches the IPv6 template

flow_total_ip6/flowStartMilliseconds:iso8601=stime
same, but rename the field to stime

the HTTP user agent list within the http template

a list of SSL certificates in the flow, each base64-encoded

21 Spark DataFrames From the Spark documentation:
Demonstration Spark DataFrames

From the Spark documentation: “A DataFrame is a distributed collection of data organized into named columns. It is conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood.”

To build a DataFrame in Mothra with the Scala language interface, and to count the number of records, see the sketch below.

One of Spark’s more useful interfaces is the DataFrames API. Here’s a brief description of DataFrames from the Spark documentation. In the input cell is the code that builds a Spark DataFrame from IPFIX input data. Once you execute this cell, the input_df variable is a reference to a DataFrame containing the IPFIX records in the input file. You can then count the records with the .count() function.
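The original input cells were slide screenshots and aren’t preserved in this transcript. A minimal sketch of those two steps, reusing the hypothetical "ipfix" reader and the field specifier syntax from the earlier slides, might look like this:

    // Build a DataFrame from IPFIX input data; the reader name, the
    // "fields" option name, and the path are illustrative assumptions.
    val input_df = spark.read.format("ipfix")
      .option("fields", "flow_total_ip6/flowStartMilliseconds:iso8601=stime")
      .load("hdfs:///data/sample.yaf")

    // Count the number of records in the DataFrame
    input_df.count()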

22 Demonstration Queries: Filtering
Spark DataFrame API maps reasonably well to rw* commands:
Filtering (rwfilter)
Syntax is similar with the Python API

Spark DataFrame API calls exist for a majority of the commonly-used SiLK tools. Where you would run rwfilter in SiLK, you’ll make a dataframe.filter() call in Spark. The syntax is a little different between Scala and Python, but the naming and functionality of the API calls are the same, as shown in the sketch below.
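A sketch of what a filter might look like, assuming the input_df DataFrame from the previous slide and an illustrative column name:

    import spark.implicits._  // enables the $"column" syntax in Scala

    // rwfilter analogue: keep only flows with destination port 53
    val dns_flows = input_df.filter($"destinationTransportPort" === 53)

    // The Python API differs mainly in operator syntax, e.g.:
    //   dns_flows = input_df.filter(input_df.destinationTransportPort == 53)

Note that Scala uses === for column equality, while Python uses the ordinary == operator.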

23 Queries: Column Selection and Display
Demonstration Queries: Column Selection and Display
Column selection & display (rwcut)

Here is an example of selecting some columns from the DataFrame for display. In SiLK, you’d use rwcut for this; in Spark, it’s dataframe.select() with a list of field names to select the columns, and .show() to display the results.
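A sketch under the same assumptions (column names are illustrative):

    // rwcut analogue: choose the columns to display, then show the first rows
    input_df.select("stime", "sourceIPv4Address", "destinationIPv4Address")
      .show()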

24 Queries: Sorting and Aggregation
Demonstration Queries: Sorting and Aggregation
Sorting (rwsort)
Aggregation (rwuniq, rwtotal, rwstats, ...)

To sort records, you just call the dataframe.sort() method with the name of the field you want to sort on. There are several SiLK tools that perform various aggregate calculations across records. Here’s an example of how you would do this in Spark. The .groupBy function is used to bin records by destination IP, and then the averages of both the packet and byte columns are calculated for each IP.
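Sketches of both operations, again with illustrative column names:

    import org.apache.spark.sql.functions.avg

    // rwsort analogue: order records by flow start time
    input_df.sort("stime").show()

    // rwuniq/rwstats analogue: bin records by destination IP, then
    // compute the average packet and byte counts within each bin
    input_df.groupBy("destinationIPv4Address")
      .agg(avg("packetTotalCount"), avg("octetTotalCount"))
      .show()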

25 Queries: Full SQL Syntax
Demonstration Queries: Full SQL Syntax

I hope the previous examples have shown how the Spark DataFrame API makes it easy to express even complex queries in a terse but understandable manner. However, for analysts who know SQL and prefer to use it to express their queries, Spark DataFrames can easily be queried using arbitrary SQL statements. This example is like the groupBy aggregate query on the previous slide, except it calculates both sums and averages across the packets and bytes columns.
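A sketch of the SQL version (the view and column names are illustrative; createOrReplaceTempView is the Spark 2.x call for registering a DataFrame so SQL can reference it):

    // Register the DataFrame as a temporary view for SQL queries
    input_df.createOrReplaceTempView("flows")

    // Sums and averages of the packet and byte columns per destination IP
    spark.sql("""
      SELECT destinationIPv4Address,
             SUM(packetTotalCount) AS total_packets,
             AVG(packetTotalCount) AS avg_packets,
             SUM(octetTotalCount)  AS total_bytes,
             AVG(octetTotalCount)  AS avg_bytes
      FROM flows
      GROUP BY destinationIPv4Address
    """).show()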

26 Queries: Compound & Nested
Demonstration Queries: Compound & Nested
Compound query example:
Build a DataFrame of SSL flows with a known bad SSL cert
Build a DataFrame of DNS flows with a non-null qname
Join the two DataFrames on the sip field
Select the sip, sslCertificate, and dnsQName fields

The final example I’ll show involves building two DataFrames, one for SSL fields and one for DNS fields. The SSL DataFrame is filtered based on a known suspicious certificate, then joined with the DNS DataFrame using the source IP column. From the resulting DataFrame, we select the source IP, the SSL certificate, and the DNS query name. A sketch of these steps follows.
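A sketch of the four steps, assuming ssl_df and dns_df were built like the earlier DataFrames (the knownBadCert value and the column names are placeholders):

    import spark.implicits._

    val knownBadCert = "..."  // placeholder for the known bad certificate value

    // 1. SSL flows whose certificate matches the known bad certificate
    val bad_ssl = ssl_df.filter($"sslCertificate" === knownBadCert)

    // 2. DNS flows with a non-null query name
    val dns = dns_df.filter($"dnsQName".isNotNull)

    // 3. Join the two DataFrames on the source IP field, then
    // 4. select the fields of interest
    bad_ssl.join(dns, "sip")
      .select("sip", "sslCertificate", "dnsQName")
      .show()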

27 Mothra: A Large-Scale Data Processing Platform for Network Security Analysis
Future Work To wrap things up, I’ll briefly mention a few things we’re working on now or plan to work on in the future, and mention some related projects that we’re aware of.

28 Future Work On the horizon: Streaming ingest from sensors
Operational analyst console integration
Simplified deployment and configuration
Open source release
Integration of useful components into other FOSS projects

Streaming ingest from sensors: Right now, IPFIX files are brought into Mothra via a batch process. We plan to investigate streaming projects, including Apache Kafka and NiFi, to help facilitate real-time streaming of IPFIX records into the cluster.

Streaming analytics: Similarly, analysis in Mothra is retrospective in nature, just as with SiLK. We would like to facilitate real-time streaming analysis as well.

Operational analyst console integration: As mentioned earlier, we’re targeting Kibana and Splunk as candidates for integration.

Simplified deployment and configuration: “Step 1: Set up a Hadoop cluster” is fine when you have a DevOps team, but not great for users who just want to try it out.

Open source release

Integration of useful components into other FOSS projects

29 Similar / Related Projects
Future Work Similar / Related Projects

Apache Metron (incubating)
Sponsored by Hortonworks, Rackspace, Cisco, and others
Apache Spot (incubating)
Sponsored by Cloudera, Intel, eBay, and others
Similar in scope and scale, different in emphasis and design
As these projects mature and grow in popularity, we may pursue integration opportunities

Metron has been around longer; Spot is the new kid on the block. Metron uses YAF for Netflow, while Spot uses nfcapd. Neither is getting deep packet data from these probes, so each could benefit from the inclusion of Mothra components. As these projects mature and grow in popularity, we may look for opportunities to pool resources with the developers of these projects and share our code and ideas with them.

30 Questions?
CERT NetSA Tools Home: http://tools.netsa.cert.org
Contact
Mailing list

