From YouTube: CNCF Research End User Group Meeting (July 20, 2022)
Description
Don’t miss out! Join us at our upcoming event: KubeCon + CloudNativeCon Europe in Amsterdam, The Netherlands from April 17-21, 2023. Learn more at https://kubecon.io The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy, and all of the other CNCF-hosted projects.
A
Cool. Yes, so I had a think about what to do this time, because we obviously didn't have an external speaker. We've got a few things on the topic backlog which we're going to try and get booked in, but I was just thinking it might be a good opportunity, just for the few of us here anyway, to get a bit more up to date with what we're all doing.
A
Hopefully you saw that I put three bullet points in the external Slack channel, just to spark the discussion: think a little bit about what we're up to, and what new technologies we've discovered in the last few months.
A
I've got a few things. And then: are there any gaps in the ecosystem? I'd be quite interested to hear what you guys have to say. I can make some notes and paste them into the document later, and then it might help us think a little bit about what we could do in future in this group, and what else we need from the community.
So I'll start. What I'm working on at the moment: we've done a bit of reorging inside my team.
A
I've now got a couple of managers working for me, which is great. That's not really specific to this group, but in terms of my workload it's helpful. Internally in G-Research, here's what we're up to.
Obviously we talk quite a bit about our Armada system, which is our system for high-throughput computing on Kubernetes. We've scaled that pretty big now, so we've got some thousands of nodes running in one of our data centers, with production workloads all flowing through it.
So the big thing we're doing at the moment is basically keeping that happy and well, and then looking at what new features we need to add for our researchers to make them more productive.
A
So the core platform is working well, but what we don't really have a good story for at the moment is sensible observability for users. All of the observability tends to be done through Grafana and metrics, which is quite good for administrators, but not so good for users.
Users just want to understand what's going on with their jobs and so forth. We've got some basic CLIs, but I think we really need to invest in the user interface, the actual UI, so that people can click around and understand what's going on in this great big machine we've built for them.
In terms of new technologies we've been looking at in the last few months: something we've started using to quite good effect is Envoy. It's a CNCF project, something that I think Istio uses internally, but we just use it directly.
It's a performant, very configurable HTTP proxy. Historically for those sorts of things we've used almost physical appliances, which are quite hard to operate and difficult to configure, and now we can just do all of this in Kubernetes using Envoy, which is really powerful, and easy to test, integration test, and deploy changes to. It sounds fantastic.
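For flavour, the kind of Envoy setup being described can be a single static config file. This is a minimal sketch only, with made-up names and addresses (the real deployments would be more involved): one listener that proxies all HTTP traffic to one upstream cluster.

```yaml
# Minimal static Envoy config: listen on :8080 and proxy HTTP
# to an upstream service. Names and addresses are illustrative.
static_resources:
  listeners:
    - name: ingress
      address:
        socket_address: { address: 0.0.0.0, port_value: 8080 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                stat_prefix: ingress_http
                route_config:
                  virtual_hosts:
                    - name: backend
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route: { cluster: upstream }
                http_filters:
                  - name: envoy.filters.http.router
                    typed_config:
                      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: upstream
      type: STRICT_DNS
      load_assignment:
        cluster_name: upstream
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address: { address: my-service.default.svc, port_value: 80 }
```

Deployed in Kubernetes, a config like this can simply ride along in a ConfigMap next to the Envoy container, which is part of what makes it easier to version, test, and roll out than a physical appliance.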
And in terms of the third bullet point, what gaps am I seeing at the moment? Within our business anyway, I still feel we're really missing a good-quality cross-cluster software-defined network, or software-defined firewall. We want the ability to be able to say "workload A, you can talk to workload B", or "this type of thing can talk to that kind of thing".
But not that kind of thing. To be able to do that, with strongly typed metadata, across clusters, would be really powerful, and I don't feel like we've got a good solution for that out there at the moment.
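Within a single cluster, the "workload A can talk to workload B" rule is roughly what a Kubernetes NetworkPolicy expresses; the gap being described is that nothing standard extends this across clusters with strongly typed metadata. A sketch, with made-up labels:

```yaml
# Allow pods labelled app=workload-b to accept ingress traffic
# only from pods labelled app=workload-a; all other ingress to
# those pods is denied. Labels here are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-a-to-b
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: workload-b
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: workload-a
```

The selectors here only ever resolve against the local cluster's pods, which is exactly why this doesn't answer the cross-cluster case described above.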
There's a couple of products we've looked at which we've then sort of stopped using, so I'd be really interested to hear if anyone has any good options for that. Yeah, that's me. I guess we'll go around, I'll just make some notes, and then we'll chat about it afterwards. So, I don't know, Jeffrey, go first.
C
Sure, and I apologize again for the construction in the background. Right now at ORNL, for the leadership computing facility, the focus of all efforts is on exascale and Frontier, and getting that into a state where we can get the early science users on there, so they can actually start utilizing that resource to get results for their research. For the Slate service running alongside our compute clusters, our supercomputers, that work is probably going to be pretty similar to what we already have in place, since the users are familiar with it; but until the early science users actually start getting on, I can't actually start testing things.
C
So we'll see, when that happens, how that testing goes. In the interim, what I've been working on lately, us being an OpenShift shop, is working with the Advanced Cluster Management pieces.
C
They say, and I haven't tried this yet, that you can also manage other Kubernetes distributions using the same software, since it's just basically Open Cluster Management with the Red Hat pieces rolled in, it looks like.
So that's a large focus of what I'm working on at the moment. And the reason why I'm working on that, my gap, is people: the Great Resignation is real, and it's hit us, as people are adjusting workflows and things like that.
A
Cool, I made some notes. I'll type it up in the doc later and you can correct all my mistakes and typos.
A
Cool, thank you. Yes, who should we go to next?
D
Okay, yeah. I'm just working on getting better container integration with Slurm. In the last release we added the ability to call OCI-compliant runtimes, and...
D
No, no, no, all of the code that goes into Slurm is reviewed by somebody who didn't write it.
D
But hopefully it'll make life a lot easier, and let people who run Podman and Docker use Slurm much more easily.
D
I added the modifications to the daemon that runs on the nodes to actually call the OCI runtimes directly.
D
Admittedly that was incomplete, because I ran out of time, and this is more of the work on it, to actually make it friendly. Slurm's actually had containers forever; it's just that everyone used them outside of Slurm, or they used a plug-in to make it work.
D
Definitely, especially since they don't actually use the OCI runtimes in a compliant way, because they use the detached mode of crun, which isn't even in the spec at all. But I got it working; it just takes a little while sitting there with a debugger.
A
Are people generally wanting to use Docker? We're obviously seeing a lot of people moving away from it, with Kubernetes not supporting it in the future.
D
Well, I've seen, although Jeff could probably give a better word on this, that Podman's becoming pretty popular with our DOE crowd, but Docker's always been the one that users have asked for. As far as I'm aware, they just want the thing to work. I mean, that's one of the reasons why you can just use Podman as an alias of Docker, and it works.
D
Actually, Sylabs is working on that as we speak.
D
Yeah, Sylabs posts all their plans online. I'll find the link for it, but it's part of their plan of record.
A
You just reminded me as well. Completely unrelated, but the same name: there's a white paper or something that's come out of Microsoft, about something they've created called Singularity, which is not a container runtime, but some kind of large-scale scheduler. I don't know, has anyone heard of that?
A
Yeah, it looked quite new. It was just a paper, with about 20 authors on it. It was massive, quite thick, and it seems to be an all-singing, all-dancing scheduler, equivalent to something like that, I suppose, but with the hardware as well. I don't know; it sounded like it did all sorts of magic things.
A
Yeah, I'll find it. Then the last question, Nate: any gaps in the ecosystem, or areas that you're working in that need plugging, other than the Docker one? No?
D
You know, it's just the usual process. By now you know what's broken, don't you? For the most part the OCI standards are really helpful, and most people follow them, at least somewhat. The most amusing part about the OCI standards is that they actually don't standardize what any of the runtime arguments are.
A
All right, I'll take some notes. Cheers, cool, thanks. Timothy, same questions to you, then.
E
Yeah, I just need to find my mute button, unmute button, and all this mess. Yeah, so I've been working through Kubernetes capabilities for researchers, from an RCD professional's perspective. So I've done FABRIC, where I had to build a Kubernetes cluster using kubeadm and a mixture of Python and cloud-init. Jetstream2 was similar. Anvil has a Kubernetes cluster, a small one, using Rancher, and that's another U.S. national system, about a thousand nodes.
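A cluster build driven by kubeadm plus cloud-init, as described here, can be sketched as a user-data fragment like the following. This is illustrative only; the pod CIDR and the assumption that the image already ships the tooling are mine, not what was actually used on FABRIC.

```yaml
#cloud-config
# Sketch: initialise a single control-plane node on first boot.
# Assumes the image already ships containerd, kubeadm and kubelet.
runcmd:
  - kubeadm init --pod-network-cidr=10.244.0.0/16
  - mkdir -p /root/.kube
  - cp -i /etc/kubernetes/admin.conf /root/.kube/config
```

Worker nodes would then run the `kubeadm join` command that `kubeadm init` prints, typically templated in by the Python side of the tooling.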
E
Before that, I had played with the Pacific Research Platform, also known as Nautilus, soon to be known as the ERN, which used to be called the Eastern Regional Network. It's a distributed Kubernetes cluster for researchers, with GPUs. I've just been looking at it from the perspective of: how easy would it be for a researcher, or somebody supporting researchers, to leverage these technologies?
E
So that's kind of what I've been up to, just evangelizing the use of Kubernetes as a way to abstract away from the public cloud, and things like that.
E
For PRP and Nautilus, they're across the country. They're doing Ceph across hundreds of miles.
E
They have about 500 GPUs, and I'm guessing a few tens in each location; I should look at what it is, but they may have like 20 or 30 different locations.
E
So it's rather interesting. Same with FABRIC: FABRIC is an experimental network, so you can get nodes across the country that are connected via 100-gig-plus networks, and you can say "I want a card all to myself", and so I built a Kubernetes cluster on top of that.
E
In terms of new technology, I've been playing with kubeadm and cloud-init systems recently, and in my beer time at home I've built a Pi Kubernetes cluster from scratch. My constraints on that, just because it's fun, were to build it using only a Docker container, with containerd, and IPv6-only, so I can build it and tear it down any time I want, and it only starts with the Docker container. Right now I've got it up and running, and now I'm trying to make it self-hosting.
E
I want to get more, but I have a controller node and two worker nodes, and then I have a provisioning node as well. The provisioner runs Docker, and runs the PXE boot and all that on top of it, and the plan is that once it gets bootstrapped, the cluster takes over that capability.
E
I grab it and sit in front of the TV and listen to music and pound away. You know, it's the opportunity to do something slow and do it right, and I haven't seriously played with Linux for a number of years, so I'm just relearning things like systemd and networkd and all those kinds of neat things, from a deep perspective.
E
They're pretty capable, and I think it would be a great tool for sysadmins trying to learn how to deploy a Kubernetes cluster, because they're capable enough and fast enough to do that. Yeah.
E
That's cool. My next project is to watch the RPi locator to see when they come up for sale, and set off an alarm and flash some lights, so I know when to go buy one.
A
Yeah. And then the final question, around the gaps in the ecosystem, or anything you're looking for.
E
The national systems don't have the ability for a user to kind of click and deploy to create a Kubernetes cluster; you'd have to build it on your own, and I think researchers would do well to be able to do that kind of thing. And then, just from my experience and seeing how things are done, I think a simple provisioning system would be nice. I mean, you don't need some of these really heavy provisioners to build a cluster.
E
Foreman is like that, yeah, and some of the other ones. It'd be nice to have something that would just simply PXE boot and bring up a node for Kubernetes.
A
Yeah, I think that's always a challenge. We've made it as easy as possible in our environment to build clusters, but it's still not a one-click thing, I guess. I don't know about you guys, but because we don't give cluster-as-a-service to people (you have namespaces on shared clusters) we haven't really had to completely streamline that cluster-building process. If Ricardo were here he might have a view, because I know they do that sort of model of spitting out whole clusters for people.
E
Part of my early exploration in beer time made me realize that they're all pretty heavyweight. Deploying, what's the OpenStack provisioner, seemed rather overkill for doing a Kubernetes cluster, yeah.
E
I don't remember what it is. You know, if that was my day job I would have reservations about that, and for beer time it definitely was not worth going down that route, because I've seen what it took my team to build and maintain an OpenStack cluster.
A
Well, thanks for joining, yeah. I don't know if you saw my notes on the Slack channel around what we're doing? No? That's fine, I can ambush you then. We haven't got an external speaker this time, so we're just doing some open discussion around a few things, but it'll be interesting for everyone to update each other on what we're working on, and on any new technologies.
B
No gaps, no, it's all fine.
B
The last part of my week has been organizing the batch working group, the CNCF batch working group stuff, and doing a bunch of outreach to try and bring as many people into that conversation as possible.
B
So Nathan, you were in there already, I think. Hi Jeff and Tim: there's a conversation going on that came out of the discussion with the TAG Runtime group, where we were asked to spin up a conversation, a working group, around batch at the CNCF level. There's already a conversation going on at the Kubernetes level for batch.
B
But there was a sense that we wanted to have a conversation at a higher level, and discuss in particular how all of the projects like Armada and Volcano and YuniKorn and MCAD (and there's a long list: Slurm, Condor) interact with Kubernetes. We thought there was a discussion to be had there. So I'm helping run that working group discussion, just trying to get hold of it and gather interested parties towards it.
B
So that's one thing I've been doing that might be interesting to this crew here.
A
Are you getting hold of the folks over Asia way? A lot of the people on Volcano, and Klaus and others, are in China. Are you finding the right contacts for them?
B
I mean, I've been okay sort of asynchronously chatting with Klaus. I'll ask him for access to the working group Google Group, and eight hours later he'll give it to me. But we haven't had many people actually join, and we had started by having the meeting at something like seven o'clock our time, so that it was ten o'clock their time, or something like that.
B
But it turned out that they would never actually join anyway. So I think we're probably going to have to do something like what Cloudera does with Ozone, where we just have a Western-countries meeting on one hand and then an APAC meeting at some other point; but I think what I need to do first is just get a critical mass of people in one of them, and then I'll try and populate the other.
E
Yeah, from a researcher's perspective, that would be a huge capability for the adoption of Kubernetes in a wide variety of areas.
B
So that might be an interesting conversation for you as well, if you're interested in low-level Kubernetes stuff. Assuming that things get fixed at that level, it probably implies changes for everybody else who has built things at a higher level. What are those implications? How does it change our worlds? How does it enable everyone else to actually do batch on Kubernetes? That's what the CNCF conversation wants to look at.
A
Yeah. In terms of people: are they going so far down the roads they're on that it's difficult for them to wind back to using something fundamental, even if it ever does exist?
B
Yeah, I mean, we may be in that situation.
B
You know, there's a real sense that after your team, Jamie, gets years' worth of experience running a system, and now has that as a real skill, you won't want to change to something else, just because that'll be an upheaval, and we're running lots of stuff on it already. So who knows what will happen when we get to that point?
B
Exactly. But for us, maybe it just becomes a meta-scheduler on top of the Kubernetes Jobs API, as simple as that kind of thing, you know. Because until KubeFed is a thing, so that you can actually federate multiple Kubernetes clusters natively in Kubernetes, you'd still need something like what Armada provides, even if below that the scheduling is actually done by Kubernetes, as it should be. Yeah, indeed. So anyway, that's exactly the kind of question we hope to entertain in the CNCF batch working group.
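The Kubernetes Jobs API referred to here is the ordinary batch/v1 Job; a minimal example, with a placeholder image and command:

```yaml
# A minimal batch/v1 Job: run one pod to completion, retrying
# a failed pod up to three times. Image/command are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-batch-job
spec:
  completions: 1
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo hello from a batch job"]
```

A meta-scheduler in the sense described would decide which cluster a Job like this lands on, then leave per-node scheduling to that cluster's own kube-scheduler.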
B
So that's one large thing we've been working on. The teams that I have are still working on all sorts of projects. Armada's an ever-increasingly big project of ours, but we also do a bunch of work directly on ML and data science tools.
B
You know: Spark, Horovod, Ray, Arrow, all of that kind of universe, LightGBM; you could go down the list. Are there any particular open source tools that all of you, or your research teams, use currently? I'm just curious whether any of them are ones that we're contributing to.
B
I suppose we've been doing one project in Kubernetes security, looking at user namespacing and giving researchers root access in a secure way, and the person that I have working on that has some pretty promising results; he can confirm that he can do all sorts of things safely, with some caveats. Also part of that project is looking at various eBPF things, around Cilium and maybe Tetragon.
B
So that's also work that we're doing. What other things have I played with recently?
B
One of the things that I've been wrestling with is data science and machine learning: the ecosystem is really confusing. Either you have to sort of cobble it all together yourself, and really understand how each part works and how each part will interact with the others, or you can go with these one-stop shops.
B
All-in-one solutions: "just use our platform and you will have notebooks through to production model serving". There doesn't seem to be much in between. So I've been trying to go through the millions of data science platform offerings currently out there, trying to figure out which ones offer which parts, and which things we actually want as G-Research, and just trying to get viewpoints on that. I was looking at Predibase, for example, which has some nice pieces to it.
B
It's
that's
from
the
guy
who
did
horovod
and
Ludwig
Ai,
and
it
builds
on
top
of
Ludwig
and
there's
some
cool
things
about.
It
might
be
cool
for
our
NLP
people,
because
it
packages
up
hugging,
face
models
which
currently
our
researchers
have
to
email
to
themselves
to
get
inside
g-research
and
then
the
email
fails,
because
the
models
are
too
big
and
then
they
have
to
go
through
the
whole
Pro.
It's
a
terrible,
terrible
thing.
B
So
you
know
I
was
looking
for
a
solution
for
them
there,
but
that
it's
a
platform
that
includes
a
whole
bunch
of
other
things
that
we
already
do
well
inside
G
research.
So
they
wouldn't
need
those
things.
So
for
us
we
want
sort
of
this
composable
data
science,
ecosystem.
B
C
A
A
A
Right. So you can only use it like a one-stop shop if you're starting from scratch, and then you use it and build around it. If you have any kind of pre-existing infrastructure or software or anything, it's difficult to just go "I'll use this one-stop shop".
B
So I don't know; nobody's really solved this ecosystem. It feels like this particular solar system of products is still forming: large bodies of gas, bits of rock colliding into each other, and eventually there'll be planets that you can visit, but nothing's really come of it yet.
B
So
anyways
I
mean
I
suppose
that
answers
question
two
and
three
yeah,
like
it's
kind
of
what
I've
been
working
on,
is
looking
around
at
the
hole
in
my
heart
of
data
science
and
yeah.
B
A
B
B
Yeah. Hey Dave, you're on there too; I missed your update.
B
I mean, I can tell you what Dave's working on. He's got the Armada team valiantly working away on Armada; we've also got the Ozone team doing snapshotting for Apache Ozone, and some other pieces in Ozone land; and then a bunch of things in F# and C# developer productivity land, just trying to improve build times and NuGet restores and all sorts of evil, horrible things in the Microsoft ecosystem. So yeah, good stuff over there.
A
Sounds like fun, yeah. Cool, all right. Well, thank you; I think that's everyone. I've been typing up some notes, very amateurishly, in Notepad, so I'll try and take all the typos out and pop it into the Google Doc, and then if you want to go back and fix anything that I got wrong, that'd be cool.
A
The next session will be in two weeks, looking at the calendar, so the third of August. I'll chat with Ricardo; I think he's back by then, and I'll be around as well, so we'll see what we can get. I'd really like to have that Cilium and eBPF session.
A
We do have someone lined up to talk, so we'll try and get that sorted and take it from there.
A
Yeah, it will be a pretty antisocial time for him, yeah.
B
He's got a little kid, so maybe he'll be up, who knows. Anyway, he's the person who's working on the user namespacing and eBPF stuff.
A
All right. Anything else from Nate or Tim? Nope.