From YouTube: CNCF SIG Runtime 2020-09-03
C
Third... first Thursday of each month at 11... I'm sorry, 8:00 am USA Pacific, which is 11 a.m. here, which is one minute from now.
C
Yeah, I suppose they could have heard who was going to be talking and said, "Oh no, I'm not showing up."
C
I got worried, but eventually I just tried signing up again and, no, it's just there.
B
Is this your first time here, Margarita? I don't know if this waiting is usual, or whether we got the right day and time.
F
Okay! So while we wait, I'm going to try and share my screen, just to check that it works, because I use Zoom inside a container and I don't know how it behaves. Okay.
G
Yeah, there we go.
C
It's a little off; it's a little tall for what it is, but it's readable.
H
Okay, yeah, welcome everyone, glad to have you here. We have about two items on the agenda today, so I'm going to put the meeting notes, or the video notes, in the chat. Let me... give me a second.
H
So that you all can add yourselves as attendees... there you go. And yeah, we have two items today: one is Talos and the other is Flatcar, two projects that are in a similar space, or that are kind of similar. But, you know, glad to have you here to talk about how different or how similar they are.
C
All right. So I am Seán McCord from CyCore Systems. We are a boutique consultancy in a number of different specialized areas.
C
I'll talk a little bit more about one of the dreams of Kubernetes, that is, to abstract away the machine. We're abstracting a lot in Kubernetes, but at least for the purposes of Talos, we're looking at this: whenever we run Kubernetes, all of our operations, our focus, our attention as users should be at the cluster level and not the machine level. We shouldn't care about the discrete resources on which the cluster is built.
C
It is, of course, suitable for cloud use, but it is also being built with keen targeting of the bare-metal user. A number of design features have guided the development of Talos based on our experiences, and we continue to refine these in order to create the easiest, most manageable container operating system in the container ecosystem.
C
Yeah, we tried that before and it ends up cutting off the slides. It has something to do, I think, with the fact that I'm running Zoom out of a web browser.
C
Which, unfortunately, I have to do, because I'm on Wayland, and Wayland has the screen-share problem. There may be a way to maximize it within the window; I don't know, but I think that might help a little bit.
C
Yeah, however one does that... I think I just got rid of the menu somehow. There we go. Nope, that's the ruler.
C
Sure, okay. So Talos is highly focused, specifically, on running Kubernetes and container-oriented workloads. Therefore we wanted to avoid using a generic, general-purpose OS. This allows us to limit the attack surface: we have a tiny set of tools and almost no listening services; we have no common packages, and we install no administrative tools to exploit.
C
Talos is designed to offer no unstructured access. Everything comes through the API, even for internal components: they talk to each other through the API. We have no cheating in the system, at any level. We have no shell; we have no SSH. We include a cluster-wide PKI with automatically rotating certs, and short-lived ephemeral certs wherever possible.
C
We are designed to minimize the maintenance overhead at every level. Obviously the API-based lifecycle controls help that a lot, offering such things as deployment, reboot control, reset (wiping the node) and upgrades, all via API. But we also offer metrics, monitoring, debugging and a number of common Unix-like utilities, to be able to diagnose and work with any problems that might arise.
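Those lifecycle operations map onto the Talos CLI; a rough, illustrative sketch (talosctl commands from roughly that era; the node IP and installer image reference are placeholders, and exact flags may differ by version):

```shell
# Reboot a node purely over the Talos API (there is no SSH to fall back on)
talosctl --nodes 10.0.0.2 reboot

# Reset, i.e. wipe, a node
talosctl --nodes 10.0.0.2 reset

# Upgrade the OS image on a node, again via the API
talosctl --nodes 10.0.0.2 upgrade --image autonomy/installer:v0.6.0

# The same API surface carries diagnostics
talosctl --nodes 10.0.0.2 dmesg
talosctl --nodes 10.0.0.2 processes
```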
C
Kubernetes
deployment
intellis
is
handled
by
a
small,
manifest
requiring
only
a
bare
minimum
of
required
customizations
to
get
bootstrapped.
Collis
is
also
a
certified
kubernetes
installer.
In
fact,
the
os
is
the
installer
telus
is
built
with
aggressive
automation
of
kubernetes
in
mind.
We
have
structured
customization
for
all
kubernetes
common
control
pump
components
and
we
manage
upgrades
for
both
the
kubelet
and
the
control
plane.
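That small manifest is the Talos machine configuration; a heavily trimmed sketch for illustration (field names approximate the v1alpha1 schema of that period, and all values are placeholders):

```yaml
version: v1alpha1
machine:
  type: controlplane        # or "init" / "join", depending on the node's role
  token: <join-token>       # placeholder; normally generated by `talosctl gen config`
cluster:
  clusterName: demo
  controlPlane:
    endpoint: https://203.0.113.10:6443
```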
C
We've tried to abstract away the Unix primitives. For instance, instead of ps we have a process list; instead of ls we have list files; instead of cat we have read. We still have aliases in the CLI tool to handle the Unix-style commands, but in general we've tried to make this accessible across the board, to as many people as possible, with no deep Unix background required.
C
Okay, yeah, likewise. Okay, at any rate, do feel free to stop me. I don't have much more here, but stop me if you have any questions; we'll have time at the end. So: we're building an eventing system which will allow Talos to be an event-driven OS, reacting to system changes.
C
So
when
we
add
a
network
link,
when
we
add
a
block
device,
remove
block
device,
etc,
we
can
quickly
and
atomically
react
at
the
talos
level
to
these
changes,
we're
building
capi
providers
for
a
number
of
cloud
providers
and
bare
metal
in
fact,
bare
metal.
We
have
an
entire
management
system
being
built
somewhat
along
the
lines
of
drp
or
matchbox,
but
a
lot
more
and
a
lot
more
specific,
specifically
catered
to
kubernetes.
C
Ten minutes or so... fifteen, maybe. Okay, here we are: share screen.
C
All right, so now that we have our Talos config set up, I should explain what we're doing: we're setting the endpoints for the Talos API, and we're setting each of those to talk to the master control-plane nodes.
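Setting those endpoints is a one-time, client-side step; roughly (the IPs are placeholders):

```shell
# Point the Talos client at the control-plane nodes' API endpoints
talosctl config endpoint 203.0.113.10 203.0.113.11 203.0.113.12

# Choose which node subsequent commands operate on
talosctl config nodes 203.0.113.10
```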
H
So, a question: management can happen anywhere, right? It can be on your laptop, or it can be on...
C
Yes. In this case I'm doing the management from my workstation here at home, and we're working on machines up in DigitalOcean.
C
Yes, so the bootstrap only occurs on a single server, but it will bootstrap a high-availability control plane from that. So it starts with one, and then it builds the rest of the control plane after that one is up and running, and Talos handles that automatically. The way it determines which nodes to use is based on the configurations that we applied to those VMs.
C
So that was back with the droplet create: we specified the user data as being one of those two files, to say which type of node is created when.
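On DigitalOcean, that creation step looks roughly like the following (the doctl flags are real, but the names, sizes, image ID and config file names are placeholders):

```shell
# Create a control-plane droplet, passing the Talos machine config as user data
doctl compute droplet create talos-cp-1 \
  --region nyc3 \
  --size s-2vcpu-4gb \
  --image <talos-image-id> \
  --user-data-file controlplane.yaml

# Workers get the other config file
doctl compute droplet create talos-worker-1 \
  --region nyc3 \
  --size s-2vcpu-4gb \
  --image <talos-image-id> \
  --user-data-file join.yaml
```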
C
Yes, it should work with all the major cloud providers, including Packet, AWS, GCP, Azure, DigitalOcean... Andrew, let me know if I'm missing any, but I think so.
C
Cool, all right. Any other questions before we see if we're up yet? Okay, we are up, so bootkube, as we see, has finished, so we should be able to grab our kubeconfig.
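Grabbing the kubeconfig is itself just another API call through the CLI; as a sketch:

```shell
# Fetch the cluster's kubeconfig from a control-plane node into the current directory
talosctl kubeconfig .

# From here on, it's ordinary Kubernetes
kubectl --kubeconfig ./kubeconfig get nodes -o wide
```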
C
All right, and that is pretty much the demo. I'm happy to open it up for questions.
H
It does, yeah. I have another question: does it manage the lifecycle of Kubernetes, or Kubernetes upgrades? And yeah.
C
Yeah. So the kubelet is presently bound to a default tied to the Talos version that's installed, so when we upgrade the Talos nodes, it will also upgrade the kubelet that's bound to that. That is also independently controllable by the Talos config.
C
So if you want to bind a particular version, you can, by modifying the config of the system. Likewise, each of the control plane elements is controllable by the config... so I shouldn't say "each"; we currently... I shouldn't say "currently" either. So there are different levels; we're in a little bit of flux. Presently, if you were to install 0.6, which is our current stable release version, the control plane is actually self-hosted and self-managed.
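Binding the kubelet to a particular version, as described above, is a small machine-config override; an illustrative fragment (the field path and the image reference are assumptions based on the v1alpha1 config of that era):

```yaml
machine:
  kubelet:
    # Overrides the kubelet image this Talos release would otherwise default to
    image: docker.io/autonomy/kubelet:v1.19.0
```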
C
So
there's
no
automatic
updating
of
the
control
plane
components
you
can
edit
those
manually
in
our
next
versions.
We
will
have
that
managed.
We
will
actually
be
backing
off
from
boot.
Coupe
entirely
been
running
all
of
the
control
plane,
components
from
talos
directly
and
at
that
point
we'll
be
able
to
structurally
control
the
versions
of
each
of
the
control,
plane,
components.
C
So that is automated. It will hook into the existing Kubernetes drain hooks, the drain and cordoning hooks, and upgrade those nodes when it's time. We're looking at building some more advanced controls into that. We'd like, for instance, to have a plug-in system, some add-ons available to Talos itself, and for those we'll be using event hooks internal to Talos to be able to signal when we can perform the upgrades on any individual nodes.
K
This is Eric. I just had a question about some of the, I guess, networking projects. Sorry, I might have noise in the background. I was just wondering, if you wanted to use something else... because I think you guys, in this example, were using Flannel. But if you wanted to use something like Cilium, would that be possible?
C
Absolutely. As a matter of fact... sorry, Andrew, did you want to answer that? Okay, it's split over, sorry. Yes, we absolutely support Cilium. In fact, many of us use Cilium as the default. It is just a line in the default config; let me see if I can show that to you.
C
So you set the CNI to blank, and then you can use whatever you want, whether it be Cilium or Calico or any of the other available CNIs.
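In machine-config terms that looks roughly like this (field names approximate the v1alpha1 schema; the "custom" value, meaning "I will install my own CNI", is an assumption):

```yaml
cluster:
  network:
    cni:
      # Tell Talos not to manage the CNI; install Cilium, Calico, etc. yourself
      name: custom
```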
E
And to be clear, that'll probably change a little bit in the longer term. The reason it's like that now has essentially been forced upon us because of bootkube, right? So as we move away from bootkube, we'll probably actually move towards Cilium as our default, I would imagine. I mean, I think we all pretty much never use Flannel anyway, so... right.
C
Especially not me, since Flannel does not support IPv6 at all. Which is a good feature I should mention: Talos is IPv6-clean. You can run a pure IPv6 system, for any of those who care.
H
And one last question: are there any plans for making this, or applying for this to be, a CNCF project, or being part of the ecosystem of CNCF projects?
B
I could try to answer that one, Seán. So I think early on we did reach out. I think there were some questions on whether or not accepting an operating system... I mean, something entirely new; you know, accepting an operating system into the CNCF was new at the time. We are a little bit of, you know... as you can tell, we're unique. So it's... yeah, the line is fuzzy. I think in the long run we would like to.
B
We
don't
have
any
immediate
plans,
it's
just
figuring
out.
What
can
we
contribute
to
cncf
and
one
of
the
places?
I
think
that
we
could
is
with
this
cozy
project,
which
is
something
that
I'd
love
to
work
on
with
other
similar
operating
systems
and
figure
out
how
we
can
get
this
into
the
cncf
as
a
project
similar
to
how
container
d
is
a
container
runtime
interface.
We
want
to
provide
an
interface
for
interacting
with
the
operating
system,
but
yeah
to
be
determined.
H
Yeah, yeah. I think the TOC is pretty open about different kinds of projects, especially now that we have the Sandbox, and some projects may not necessarily fit the usual criteria.
H
For
example,
the
cncf
just
accepted
into
sandbox
k3s,
which
is
a
kubernetes
distribution
so
for
a
kubernetes
flavor,
so
yeah
so,
and
I
think
this
ncf
wants
the
sandbox
to
be
kind
of
like
a
sort
of
a
place
where
products
can't
develop.
H
All right, thank you very much. That was a very complete and thorough presentation.
F
All right, awesome. So yeah, I'll give a brief introduction to Flatcar. We don't have a demo prepared, but I guess, as it's based on CoreOS and it's not such a, I don't know, special thing, maybe it's not necessary to have a demo. So Flatcar is developed by Kinvolk. So first: who is Kinvolk? Kinvolk is a company that has existed since 2015.
F
So
we've
been
around
for
five
years.
We
do
a
lot
of
things
related
to
linux,
containers,
security
and
we
do
open
source
engineering
with
many
different
clients.
We
have
a
couple
of
products
flatcar,
which
is
the
one
that
we
are
here
to
talk
about
today,
is
a
linux
distro
derived
from
chorus,
and
we
also
have
a
kubernetes
distro
called
locomotive,
and
on
top
of
these
two
products
we
also
do
other
consulting
development
like
rocket
and
go
bpf
were
developed
by
by
kinfolk
engineers.
F
So
what
is
a
container
linux?
This
is
pretty
basic,
and
I
guess
many
of
you
know
this
already.
These
are
the
bases
where
that
core
s
was
built
upon
so
container.
Linux
means
that
it's
a
minimal
distribution
that
only
has
what's
needed
for
running
containers
on
top
of
it,
so
it
doesn't
have
everything
that
a
general
purpose
distribution
has.
It
only
has
like
the
minimum
base,
so
there's
less
software
to
manage.
There's
a
reduced
attack,
surface
area.
Of
course
it's
not.
F
This is some stupid bug that my Zoom has; I'll try again.
F
Okay,
sorry,
I
don't
know
this
happened
already.
I
think
my
my
my
zoom
setup,
that
is
super
paranoid,
doesn't
like
it
and
it
it
sometimes
stops
redrawing
the
screen
or
something
let's
see.
Does
it
show
what
is
a
container
linux
now.
F
Okay, so container Linux: it means that we have a reduced attack surface area. Of course it's not as small as Talos's, but it's much smaller than a typical general-purpose distribution.
F
The file system is immutable; particularly, the /usr part of the file system is immutable. So that means there's also less attack surface there. /etc is still mutable, so configurations are still possible, but it reduces the number of bugs and security threats. And it has automated updates.
F
So
whenever
there's
a
new
version,
it
just
gets
applied
and
works,
and
it
rolls
back
automatically
if
it
fails
to
boot
so
like
if
the
machine
boots
and
tries
to
boot
and
fails
it,
it
gets
automatically
rolled
back
next,
all
right,
so
we
mentioned
this
already,
but
just
so
that
it's
super
clear
flat
car
is
based
on
coreos,
which
itself
is
based
on
chrome
os,
which
is
based
on
gentoo.
So
we
have
like
all
of
this
history,
but
right
now
we
are
going
on
our
own,
so
chorus
doesn't
exist
anymore.
F
It
reached
its
end
of
life
and
while
before
we
were
tracking
chorus
and
everything
it
did
now,
we
are
going
on
on
our
own
developing
our
own
new
features
and
so
on.
F
Next,
if
you
don't
know
where
the
name
flat
car
comes
from,
it
comes
from
a
train
metaphor
or
maybe
not
metaphor
female.
It's
the
kind
of
train
that
carries
containers.
So
it's
that's
why
it's
a
flat
car
container
linux
next,
all
right.
So
how
is
flat
car
structured?
F
It
has
four
channels
so
core
os
had
three
channels,
alphabet
unstable
and
we
keep
those,
and
we
also
have
an
experimental,
channel
or
labs
channel
where
we
try
out
things,
and
maybe
we
decide
they
are
a
good
idea
or
maybe
not
this
alphabet
unstable
channels.
We
we
introduced
the
new
changes
in
alpha
and
after
we
are
happy
with
those
changes
they
get
promoted
to
beta
and
once
they've
been
in
beta
long
enough
that
we
think
it's
stable
and
they
get
promoted
to
stable.
F
We have images available on all major cloud providers, and some minor cloud providers as well, and they are also publicly available to download; we have images for a lot of different types of machines. And we have a public update server that anybody running Flatcar can use to get updates to the latest version; there's more on that on the next slide.
F
So
the
kinfolk
update
servers,
it's
a
completely
open
source
project,
it's
code
name
nebraska,
and
we
are
running
one
community
instance
that
can
be
used
by
anybody
running
flat
car
and
also
people
that
want
to
have
more
control
and
want
to
run
their
own
version
can
run
it
on-prem
or
hosted
by
us,
and
this
update
service
allows
operators
to
decide
when
and
how
they
update
their
machines.
F
Okay,
so
what's
our
current
status,
we
have
a
team
at
kinfolk
that
is
dedicated
to
this
project
and
we
are
keeping
it
alive.
We
have
build
and
test
infrastructure
a
lot
of
integration
test
that
makes
sure
that
it
runs
correctly
in
many
different
cloud
providers
and
a
lot
of
that
is
thanks
to
packet.
We
have
sponsored
infrastructure
by
packet
which
helps
us
a
lot.
We
have
all
channels
maintain
completely
independent
from
core
os
and
we
have
support
infrastructure.
F
We
we
have
already
a
bunch
of
large
enterprise
customers,
a
few
thousand
hosts,
so
it's
it
has
been
growing,
there's
a
graph
coming
up
in
a
couple
of
slides
and-
and
we
are
happy
that
a
lot
of
people
are
adopting
flag,
car
and
yeah.
As
I
mentioned,
it's
integrated
into
lots
of
cloud
platforms
and
coordinated
distros.
So
in
the
next
slide,
we
have
a
bunch
of
logos
of
supported
platforms.
We
are
working
on
the
cluster
api
integration.
It
should
be
ready
in
the
near
future.
F
We
are
also
working
in
integration
with
rancher,
but
we
so
we
care
not
only
about
getting
integrated
as
a
base
os
on
the
different
platforms,
but
also
about
being
integrated
into
into
things
like,
like
rancher
or
kubernetes,
like
integrating
with
the
whole
ecosystem
and
the
next
slide.
We
have
a
pretty
graph,
although
without
numbers,
and
basically
this
shows
that,
when
chorus
reached
end
of
life
or
or
a
little
bit
before,
korres
reached
end
of
life,
a
lot
of
people
decided
to
migrate
to
flat
car,
and
this
has
kept
going
up.
F
So
it's
it's
nice
that
these
people
that
migrated
to
flat
car
didn't
then
migrate
away
from
it.
They
they
were
still
happy
with
the
results
and
yeah.
We
see
this.
This
constant
increase
in
adoption
as
time
passes
all
right,
so
plans
for
the
future.
We
are
working
on
publishing
a
public
roadmap
that
will
be
maintained
in
the
open,
a
lot
of
the
work
that
we
have
been
doing
in
the
past.
F
Few
months
has
been
updating
the
core
os,
the
last
core
os
version
to
newer
versions,
so
we
have
updates
to
the
kernel
to
systemd
and
a
lot
more
packages
coming
up.
So
in
the
current
beta
we
have
kernel,
5,
4
and
systemd,
245
and
and
then
once
all
of
that
is
unstable,
we
plan
to
keep
it
updating
that-
and
this
is
this
is
a
lot
of
work,
because
we
were
working
from
like
the
basis
of
coreos
and
and
chorus
kind
of
stopped
doing,
updates
for
a
long
time.
F
So
there's
a
lot
of
packages
to
update
there,
but
the
goal
is
to
basically
reach
a
point
where
everything
is
up
to
date
to
the
latest
versions
and
but
then
in
the
last
point
in
this
slide
the
lts
release.
Some
people
realize
that
they
actually
like
the
fact
they
they
liked.
The
fact
that
for
the
past
year
or
so
coreos
had
been
making
very
few
updates,
they
actually
like
an
os
that
changes
less
more
than
they
like
being
on
the
fleeting
edge.
F
So
we
are
working
on
releasing
an
lts
version
which
will
not
change
so
much
so
it
will
be
supported
for
18
months
and
then,
after
that,
people
can
migrate
to
the
next
version,
but
it
won't
be
changing
all
the
time
how
the
stable,
like
the
stable
version,
the
stable
version
will
keep
changing,
as
as
it
should
and
yeah.
So
that's,
basically
it
we
have
one
more
slide.
F
But
yeah,
basically,
we
are
proud
to
be
continuing.
The
legacy
of
korres
chorus
doesn't
exist,
but
the
spirit
lives
on
in
flat
car
and
we
also
the
locomotive.
Our
kubernetes
distribution
is
kind
of
like
the
the
legacy
of
tectonic,
which
was
the
the
chorus
coordinates,
distribution,
yeah
and
that's
it
for
the
presentation.
H
Yeah, I have questions. So, I'm not 100 percent familiar, or super familiar, with how CoreOS was managed, but do you plan to also have an API-based type of management, or do you have that already?
J
It depends on how you look at it, right? So... hi, by the way, I'm Thilo, one of the directors of engineering at Kinvolk, and my team owns Flatcar. So we're not quite taking the direction that Talos is taking, right? One of the approaches of CoreOS was that the operating system is the layer of your cluster which either runs containers, or runs Kubernetes, which in turn runs containers. It's kind of just a piece of boring infrastructure: you don't want to handle it too much.
J
You
want
to
have
itself
updating
and
maybe
make
a
noise
if
something's
wrong,
but
it
shouldn't
be
too
exciting
for
you
right
so
and
that's
the
that's
the
philosophy
that
chorus
drove
you
do
most
of
your
configuration
at
provisioning
time.
J
So
we
we
use
just
like
horus.
We
use
ignition
for
that.
There
is
a
certain
amount
of
compatibility
to
cloud
air
to
to
cloud
init
as
well,
so
you
can
do
either
or-
and
you
have
minimal
changes
at
at
reboot
time
and
that's
about
it.
Most
of
the
operating
system
remains
immutable.
So
if
you
want
to
change
anything,
you
re
redeploy.
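Provisioning-time configuration on Flatcar, as on CoreOS, is usually written as a Container Linux Config and transpiled to Ignition JSON with the `ct` tool; a minimal sketch (the SSH key and the service are placeholders):

```yaml
# Container Linux Config; transpile with `ct` to Ignition JSON before boot
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - "ssh-ed25519 AAAA... user@example"   # placeholder key
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Run an nginx container
        [Service]
        ExecStart=/usr/bin/docker run --rm -p 80:80 nginx
        [Install]
        WantedBy=multi-user.target
```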
J
That's
for
major
changes.
The
operating
system
doesn't
have
any
package
management
you're,
not
installing
software,
on
flat
card,
because
you
run
containers
right.
So
in
order
to
upgrade
you
just
you
know,
do
this
a
b
penetration
flipping
thing,
your
update
service
on
flatcar
pulse
the
update
server,
which
is
either
the
public
instance
that
we
host,
or
your
own
nebraska
instance,
that
you
host
and
writes
the
update
to
the
second
partition
and
then
either
does
the
reboot,
depending
on
its
configuration
or
signals
a
upper
level
that
it's
now
time
to
reboot.
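That A/B flow can be observed and nudged from a node with the update engine's client; roughly (these are the CoreOS-lineage update_engine_client flags; output details may vary):

```shell
# Ask the update engine to check for and apply an update now
update_engine_client -check_for_update

# Watch the state machine: IDLE -> DOWNLOADING -> FINALIZING -> UPDATED_NEED_REBOOT
update_engine_client -status

# After reboot, inspect which partition set (A or B) is active
cgpt show /dev/sda   # partition priorities mark the active USR partition
```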
H
Yeah
I
got
it
so
yeah
I
was,
I
was
kind
of
curious.
I
mean
some
of
the.
The
operating
assistants
are
basically
moving
towards
this
api
based
approach.
The
other
project
that
I
from
earlier
was
sort
of
familiar
is
model
rocket
and
then
also
doing
something
with
apis,
but
I
think
it
just
ultimately
is
what
the
philosophy
is
on
the
with
the
project
right
and
and
how
they
want
to
what
the
users,
what
they
used
to
really
want
right
and.
J
For Flatcar, the closest thing to an API we have is the configuration at provisioning time. Other than that, we just don't want to be in the way; you would basically solve everything else with the containers.
K
This is Eric. I did have one question, and I'm sorry if this may have already been answered, but I believe there's still Fedora CoreOS, and I remember seeing the announcement that with Flatcar it was going to be a seamless transition, whereas with Fedora there might have been a little bit more of a bump. I was just wondering if you guys could shed a little bit more light on what the difference is between, you know, Fedora CoreOS and Flatcar.
F
Sure, yeah. Actually, I had a slide about this and then I removed it, because I thought, like, okay, CoreOS has now been end-of-life for so many months that talking about the update is not relevant anymore; but I guess I was wrong. So updating from CoreOS to Flatcar is a very, very simple thing. You just change the server that you're using for the updates, and then you get the update to Flatcar; the update is just handled by the OS.
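That server change is a small edit on each node; an illustrative sketch (the SERVER URL follows Flatcar's documented public update endpoint of that period; a full migration also installs Flatcar's image signing key, per the migration docs):

```shell
# /etc/coreos/update.conf on a CoreOS machine being pointed at Flatcar's update server
GROUP=stable
SERVER=https://public.update.flatcar-linux.net/v1/update/

# Then restart the update engine and force a check:
#   systemctl restart update-engine
#   update_engine_client -check_for_update
```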
F
So
you
basically
just
need
to
change
the
server
that
you
get
the
update
from
and
that's
it,
and
so
I
guess
the
main
difference
between
fedora
chorus
and
flat
car
is
that
we
are
a
drop-in
replacement
exactly
it
works
exactly
the
same
way:
chorus
worked,
whereas
in
fedora
coreos
they
went
a
different
way
and
you
basically
would
need
to
completely
redeploy
and
a
lot
of
things
have
changed
so
like
you
would
need
to
adapt
your
setup
to
run
on
fedora
cordos.
K
Gotcha, thank you. And then I did see a recent announcement about eBPF support, I think. Could you elaborate a little bit more on that?
F
Sure
we
we
use
ebpf
a
lot,
so
we
we
have
a
bunch
of
tools
that
we
developed,
and
so
we
ensure
that
things
work
like
it's
possible
and
easy
to
use
evpf
in
flat
car.
F
It's
not
like
very
special
changes
that
we're
doing
we're
just
basically
enabling
things
in
the
kernel
that
are
there
to
be
used,
and
we
are
just
providing
containers
container
images
with
bpf
tools
that
you
can
run
on
top
of
flatcar
so
that
you
can
use
ppf
on
flat
car.
And
so,
if
you
run
a
kubernetes
distribution
on
flat
car,
you
can
use
the
bpf
tools
that
we
wrote
or
the
cube
ctl
trace
that
other
people
wrote
directly
without
having
to
do
anything
special
because
it
just
works
out
of
the.
J
So there's this BPF-based suite of tools that we've been developing, and it's actually a pity that Alban, who was in this meeting earlier, left, because he's the director driving that team. The tool set is called Inspektor Gadget, and it allows you, at the Kubernetes level, to gain insight into what your server is doing, and it uses eBPF for that.
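As a flavor of what that enables, here is roughly how a one-off bpftrace program can be launched against a node with kubectl-trace (the node name and the probe itself are just examples):

```shell
# Run a bpftrace one-liner on a specific node; kubectl-trace schedules it as a pod there
kubectl trace run node/worker-1 \
  -e 'tracepoint:syscalls:sys_enter_execve { printf("%s\n", comm); }'
```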
J
So
flatcar
will
need
some
integration
work
and
some
testing
work
and
that's
what
margaret
was
mentioning-
and
this
is
mainly
about
making
sure
that
everything
works
from
this
from
scratch
and
seamlessly
and
users
can
just
go
and
use
tools
like
inspector
budget
gadget
or
the
bcc
tool
suit
that
is
around
and
without
jumping
through
any
for
any.
H
So
yeah,
I
think
my
last
question
is
the
same
that
I
that
I
asked
the
talos
team
is:
are
there
any
plans
for
making
this
a
cncf
project
or
applying
for
one
of
the
stages
in
the
cncf.
J
No, no, it's a very good question; it just deserves a very good answer. So: we didn't fully investigate options yet. It sure is an interesting direction to explore. If he were present, I'd actually forward this question to our CEO, Chris Kühl; he's, I think, one of the first CNCF ambassadors. So there is very strong interest in working with the CNCF, but we don't have any concrete plans or anything to announce at this point in time.
H
Going once, going twice... Well, I want to thank the Talos team and I want to thank the Flatcar team for both of your presentations. And yeah, we also have a SIG Runtime Slack channel and mailing list; any other topics that you want to discuss, or any questions about these projects or any other projects related to SIG Runtime, feel free to send them that way. So yeah, thank you very much.