DFFML / Rolling Alice: Progress Reports


These are all the meetings we have in "Rolling Alice: Progr…" (part of the organization "DFFML"). Click into individual meeting pages to watch the recording and search or read the transcript.

10 Oct 2022

  • 1 participant
  • 12 minutes
docker
bootloader
linux
vm
reboot
distro
loader
mount
efi
process

30 Aug 2022

Rolling Alice: https://github.com/intel/dffml/tree/alice/docs/tutorials/rolling_alice
Slide Deck (commenting enabled): https://docs.google.com/presentation/d/1WBz-meM7n6nDe3-133tF1tlDQJ6nYYPySAdMgTHLb6Q/edit#slide=id.p
Engineering Log Entry: https://github.com/intel/dffml/discussions/1406#discussioncomment-3510908
Status Updates Playlist: https://www.youtube.com/playlist?list=PLtzAOVTpO2jZltVwl3dSEeQllKWZ0YU39

- Who is Alice?
- Alice will be our developer helper and one day a developer herself. She helps us understand and perform various parts of the software development lifecycle.
- We currently extend her by writing simple Python functions which can be distributed or combined in a decentralized way.
- She is built around a programming language agnostic format known as the Open Architecture.
- Eventually we will be able to extend any part of her in any language, or have parts be driven by machine learning models.
- What is the Open Architecture?
- It's the methodology that we use to interpret any domain specific description of architecture.
- We are developing the open architecture so that we can do a one hop on analysis when looking at any piece of software from a security or other angle.
- Having this generic method to describe any system architecture allows us to knit them together and assess their risk and threat model from a holistic viewpoint.
- Why work on the Open Architecture?
- We want this to be a machine- and human-interpretable format so that we can validate the reality of the code as it exists in its static form, what it does when you execute it, and what we intend it to do.
- Intent, in our case, is measured by conformance to, and completeness of, the threat model, and therefore also the associated Open Architecture description.
- The entity analysis Trinity
- The entity analysis Trinity helps us conceptualize our process. The points on our Trinity are Intent, Dynamic Analysis, and Static Analysis.
- By measuring and forming understanding in these areas we will be able to triangulate the strategic plans and principles involved in the execution of the software as well as its development lifecycle.
- We use the Trinity to represent the soul of the software.
- What happens when we work on Alice?
- We build up Alice's understanding of software engineering as we automate the collection of data which represents our understanding of it.
- We also teach her how to automate parts of the development process, making contributions and other arbitrary things.
- Over time we'll build up a corpus of training data from which we'll build machine learning models.
- We will eventually introduce feedback loops where these models make decisions about development / contribution actions to be taken when given a codebase.
- We want to make sure that when Alice is deciding what code to write and contribute, she follows our organizationally applicable policies, as outlined, perhaps in part, via our threat model.
- Who is working on Alice?
- The DFFML community and anyone and everyone who would like to join us.
- Our objective is to build Alice with transparency, freedom, privacy, security, and egalitarianism as critical factors in her strategic principles.
- You can get involved by engaging with the DFFML community via the following links.
- Every time we contribute new functionality to Alice we write a tutorial on how that functionality can be extended and customized.
- We would love if you joined us in teaching Alice something about software development, or anything, and teaching others in the process.
- It's as easy as writing a single function and explaining your thought process.
- The link on the left will take you to the code and tutorials.
- We are also looking for folks who would like to contribute by brainstorming and thinking about AI, and especially AI ethics.
- The link on the right will take you to a document we are collaboratively editing and contributing to.
- Plans
- Ensuring the contribution process to what exists (`alice please contribute`) is rock solid.
- Building out and making `alice shouldi contribute` accessible and ready for contribution.
- Engaging with those that are collecting metrics (https://metrics.openssf.org) and ensuring our work on metric collection bears fruit.
- Following our engagement on the metric collection front, we will perform analysis to determine how best to target further `alice please contribute` efforts, and align the two with a documented process for how we select high value targets so that others can pick up and run with extending them.
- Later we'll get into more detail on the dynamic analysis portion of the Trinity, where we'll work, over time, across many program executions of the code we are working on, to understand how its execution maps to the work we're doing, via our understanding of what we've done (`please contribute`) and what we were doing it on (`alice shouldi contribute`).
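The "simple Python functions" Alice is extended with can be sketched as below. This is a standalone illustration: in DFFML such a function would additionally be registered as an operation so it can be distributed and combined with others in a data flow, and that wiring (and the function's name and metric) are illustrative assumptions here, not part of the actual codebase.

```python
# A standalone sketch of the kind of "simple Python function" Alice is
# extended with. In DFFML this would be registered as an operation so it
# can be combined with others in a data flow; that wiring is omitted here.
def count_comment_lines(contents: str) -> int:
    """Toy metric: count the comment lines in a Python source file."""
    return sum(
        1
        for line in contents.splitlines()
        if line.strip().startswith("#")
    )


source = "# header\nx = 1\n  # inline note\n"
print(count_comment_lines(source))  # 2
```

Because the function is plain Python with typed inputs and outputs, the surrounding framework can wire it into larger flows without the author having to know how it will be combined.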
  • 1 participant
  • 5 minutes
alice
conceptualize
architecture
entity
functionality
agnostic
evolves
software
analysis
collaboratively

17 Aug 2022

Welcome to the Second Annual
OWASP AppSec Pacific Northwest
OWASP Chapters of Victoria, Vancouver, and Portland have combined to deliver an amazing event for application security practitioners on June 11th, 2022.


----------------------- John L. Whiteman ------------------

Security Researcher at Intel Corporation

John L. Whiteman is a security researcher for Intel and a part-time adjunct cybersecurity instructor for the University of Portland. He also teaches the UC Berkeley Extension’s Cybersecurity Boot Camp. John holds a Master of Science in Computer Science from Georgia Institute of Technology. He possesses multiple security certifications including CISSP and CCSP. John has over 20 years of experience in high tech with over half focused on security. You can also hear John host the OWASP PDX Security Podcast online. John grows wasabi during his “off” hours.

https://twitter.com/johnlwhiteman1
https://www.linkedin.com/in/johnlwhiteman/

Presentation Abstract
Living Threat Models Are Better Than Dead Threat Models
The cornerstone of security for every application starts with a threat model. Without it, how does one know what to protect and from whom? Remarkably, most applications do not have threat models; just take a look at the open-source community. And even if a threat model is created, it tends to be neglected as the project matures, since any new code checked in by the development team can potentially change the threat landscape. One could say that the existing threat model is as good as dead if such a gap exists.

Our talk is about creating a Living Threat Model (LTM) where the same best practices used in the continuous integration of source code can aptly apply to the model itself. LTMs are machine-readable text files that coexist in the Git repository and, like source code, can be updated, scanned, peer reviewed, and approved by the community in a transparent way. Wouldn’t it be nice to see a threat model included in every open-source project?

We also need automation to make this work in the CI/CD pipeline. We use the open-source Data Flow Facilitator for Machine Learning (DFFML) framework to establish a bidirectional data bridge between the LTM and source code. When a new pull request is created, an audit-like scan is initiated to check whether the LTM needs to be updated. For example, if a scan detects that new cryptography has been added to the code, but the existing LTM doesn’t know about it, then a warning is triggered. Project teams can triage the issue to determine whether it is a false positive or not, just like source code scans.
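The cryptography example above can be sketched as a minimal audit check. Everything here is an illustrative assumption, not the actual DFFML or LTM format: the regex only catches a few Python crypto imports, and the threat model is a hypothetical parsed document with a "cryptography" key listing known components.

```python
import re

# Illustrative pattern: a real scan would be far more thorough than a
# handful of Python import names matched on added ("+") diff lines.
CRYPTO_IMPORT = re.compile(
    r"^\+\s*(?:import|from)\s+(cryptography|hashlib|ssl|nacl)\b"
)


def audit_pull_request(diff_text: str, threat_model: dict) -> list[str]:
    """Warn when a diff adds cryptography the threat model does not list.

    `threat_model` is a hypothetical parsed LTM whose "cryptography" key
    names the crypto components the model already knows about.
    """
    known = set(threat_model.get("cryptography", []))
    warnings = []
    for line in diff_text.splitlines():
        match = CRYPTO_IMPORT.match(line)  # only added lines start with "+"
        if match and match.group(1) not in known:
            warnings.append(
                f"New cryptography '{match.group(1)}' is not in the threat model"
            )
    return warnings


diff = "+import ssl\n+import hashlib\n-import os\n"
print(audit_pull_request(diff, {"cryptography": ["hashlib"]}))
# ["New cryptography 'ssl' is not in the threat model"]
```

A finding like this would be surfaced on the pull request for the project team to triage, exactly as a source code scan result would be.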

We have been working on this effort for a few years and feel we are on the right track to make open-source applications more secure in a way that developers can understand.
  • 3 participants
  • 58 minutes
security
threats
intel
speakers
john
credentials
interview
discussion
editors
recommend

29 Jul 2022

See https://www.youtube.com/watch?v=u2lGjMMIlAo&list=PLtzAOVTpO2ja6DXSCzoF3v_mQDh7l0ymH or skip to https://www.youtube.com/watch?v=JDh2DARl8os&t=313s for contributing to Alice please contribute recommended community standards
Status Updates: https://www.youtube.com/watch?v=THKMfJpPt8I&list=PLtzAOVTpO2jZltVwl3dSEeQllKWZ0YU39
Engineering Log: https://github.com/intel/dffml/discussions/1406#discussioncomment-3279821

The first 5 minutes of this, where we do the download and fight the network, were lost.
  • 1 participant
  • 16 minutes
alice
repo
project
recap
installing
currently
pip
publishing
debugging
proactive

28 Jul 2022

SCITT meeting session at IETF114
2022/07/28 1730

https://datatracker.ietf.org/meeting/114/proceedings/
  • 31 participants
  • 2 hours
bath
discussions
having
meet
endeavor
preparing
interim
elliot
minutes
bob

27 Jun 2022

  • 1 participant
  • 15 minutes
thread
alice
brainstorming
helper
implementation
v2
discussion
maintaining
debugging
realized

3 Jun 2022

  • 1 participant
  • 14 minutes
alice
project
implemented
realization
representation
manifest
primitive
serialization
recap
thread

2 Nov 2019

As people with the word “security” in our titles, we come across a lot of questionable decisions. It’s our job to scrutinize the dubious and guide the less paranoid. Wide-eyed developers in a dependency wonderland can easily find themselves smoking opiumssl with a caterpillar from Stack Overflow who assured them it’s twice as performant as openssl. Never mind the fact that it was written by @madhatter in 2012 and never touched since. In our infinite wisdom we set them back on the right track. But how wise are we really? Could a robot do just as good a job at guiding them through the looking glass?

Security research, embedded systems, machine learning, and data flow programming are his current interests. He’s on the Open Source Security team at Intel. He’s from Portland, went to PSU for computer engineering with a focus on embedded systems, and did his honors college thesis on machine learning. He’s been working at Intel, first as an intern and then as an employee, for the past 5 years.
  • 2 participants
  • 19 minutes
intel
security
cpu
linux
software
dependencies
automated
reviewing
important
repos

10 Feb 2019

John L. Whiteman

Static application security testing (SAST) is the automated analysis of source code both in its text and compiled forms. Lint is considered to be one of the first tools to analyze source code, and this year marks its 40th anniversary. Even though it wasn’t explicitly searching for security vulnerabilities back then, it did flag suspicious constructs. Today there are a myriad of tools to choose from, both open source and commercial. We’ll talk about things to consider when evaluating web application scanners, then turn our attention to finding additional ways to aggregate and correlate data from other sources, such as git logs, code complexity analyzers, and even rosters of students who completed secure coding training, in an attempt to build a predictive vulnerability model for any new application that comes along. We’re also looking for people to contribute to a new open source initiative called “The Bad Human Code Project.” The goal is to create a one-stop corpus of intentionally vulnerable code snippets in as many languages as possible.
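The aggregate-and-correlate idea in the abstract can be sketched as a tiny per-file feature record and scoring function. The field names and weights here are illustrative assumptions; a real predictive model would be learned from labeled vulnerability data rather than hand-weighted.

```python
from dataclasses import dataclass


@dataclass
class FileFeatures:
    """Per-file signals correlated from the sources named in the talk."""
    churn: int            # commits touching the file (from git log)
    complexity: float     # e.g. cyclomatic complexity from an analyzer
    author_trained: bool  # author completed secure-coding training


def risk_score(features: FileFeatures) -> float:
    """Toy hand-weighted score standing in for a trained model."""
    score = 0.1 * features.churn + 0.3 * features.complexity
    if not features.author_trained:
        score += 1.0  # untrained authors raise the predicted risk
    return score


hot_file = FileFeatures(churn=10, complexity=5.0, author_trained=False)
print(round(risk_score(hot_file), 2))  # 3.5
```

The value of the approach is in joining signals no single scanner sees; once the features are in one record per file, swapping the hand-tuned score for a trained classifier is a small step.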

John L. Whiteman is a web application security engineer at Oregon Health and Science University. He builds security tools and teaches a hands-on secure coding class to developers, researchers and anyone else interested in protecting data at the institution. He previously worked as a security researcher for Intel’s Open Source Technology Center. John recently completed a Master of Computer Science at Georgia Institute of Technology specializing in Interactive Intelligence. He loves talking with like-minded people who are interested in building the next generation of security controls using technologies such as machine learning and AI.
  • 1 participant
  • 28 minutes
bad
ai
behavior
sast
flaws
human
warnings
civil
americans
war