From YouTube: /lgtm with Containerd
A
Well, thank you. Thank you, everyone, for tuning in. Please feel free to use the chat to ask us questions along the way, and also remember this is a CNCF event and, as such, is subject to the code of conduct: please be respectful to myself, to Phil, and to everyone in the chat. All right, Phil, can you do us a favor and give us a quick introduction about yourself, and then we'll talk about the plan for today?
B
Sure, yeah. I guess for today's topic, maybe the most interesting thing is just how active I've been in the container runtime community for six years or so. That started with getting involved in Docker in the early days of the Docker open source project, working some in the OCI and runc, and then focusing a lot on containerd the last few years. My employer is AWS.
B
You know, I've been involved in this part of the community for a long time, and yeah, I'm excited to share with folks how containerd is put together as a project and how you can get involved. So yeah, thanks for having me.
A
Great, we're really excited for today's episode, walking through containerd and how to contribute. You know, that is the purpose of LGTM: to give a home to anyone watching who's thinking, "I'd love to contribute, and I don't know how to get started." That's why we're here, and for containerd there's just no better person than you, Phil.
B
Yeah, so, like I said, containerd has... I said a couple of years, but I guess officially it's been around longer; it's been at least five years since the codebase started. But I think an important data point is that containerd had a shift in its life in late 2016 and early 2017, from just being sort of a process supervisor used by the Docker engine to manage the lifecycle of runc (and again, that's from the Open Container Initiative).
B
So it started its early days as quite a small project that just played this intermediary role between Docker, the broader runtime engine, and runc, which actually works with the operating system to create your containerized process. But then, late in 2016, we announced, along with Docker and other people in the community, that containerd would grow into a more complete container runtime.
B
For example, it would have registry interactions, and it would have snapshotters for how your images are stored on the local filesystem, using different copy-on-write filesystem providers. So containerd, in that late-2016, early-2017 era, became more than just this process supervisor and really became a container runtime.
B
Many new contributors showed up, and many more cloud providers and other downstream users adopted it, or started the process of shifting, for example, their managed Kubernetes services from using Docker to containerd. Of course, that took some time, both to migrate and for containerd to mature, but effectively you'll find containerd used pretty heavily in cloud managed services and in functions-as-a-service offerings.
B
I know IBM uses it in some of their functions-as-a-service offerings; Alex Ellis has faasd, which embeds containerd; Darren Shepherd and the Rancher team created k3s, which embeds containerd. So there's really a broad set of consumers of containerd as this core container runtime, in some of those use cases I just mentioned.
B
You know, one of the benefits is that it's embeddable. It has a nice Go API that can be used from other Go programs, either as a client or even embedding the entire server, like k3s does. And so, yeah, we've seen tons of growth in that last three-to-four-year period, both in usage but also in contributors and people involved in the project. So it's been great; it's been a healthy project. We graduated in the CNCF a couple of years ago, having shown value to the ecosystem, good governance, and contribution from a lot of different parties. Yeah, it's been a great project.
A
Awesome, yeah. I'm always surprised when people say they're not using containerd, and then, when you really look at the stack of what they're using, you realize just how far and wide containerd has spread. Actually, almost all of us are using containerd now if we're running containers with Kubernetes or any of these other, you know, serverless-style tools.
A
All right. Well, today you're going to give us a quick code walkthrough tour; we're going to take a look at the different components, I'm going to pepper you with loads of questions as we go, and then we'll take a look at the development experience as well. So I will bring your screen up here, and if you want to just take it away, I'll do my best to ask questions.
B
Most of the projects within containerd are what we call core projects, so the same list of maintainers all have write authority over all those sub-repos, and many of them are vendored into the containerd project. So when you build containerd, there's a certain release, or hash, of that sub-project that's included in the vendoring. Those are what we call core projects.
B
A non-core sub-project may be in an area that's aligned with containerd but not really core to the project itself. Anyway, there are definitions here, and if you read the whole governance document you'll see the slight variations, but as you're looking through the repos, it's good to understand the difference between what we call the core containerd project and these non-core sub-projects.
B
In fact, just here near the top of our list of repos: stargz-snapshotter, nerdctl, ttrpc-rust. These three projects are non-core sub-projects of containerd; they've been brought in because they're interesting to the project as a whole. stargz-snapshotter, for example, can be built into containerd for a lazy-pull container image implementation. Maybe some people have heard of nerdctl; this is an interesting project created by one of our maintainers.
B
That
gives
you
a
more
docker-compatible
cli.
So
if
you
don't
like
the
limitations
of
ctr,
which
is
the
container
need
client,
you
can
try
out
nerd
ctl,
which
includes
all
kinds
of
interesting
things
like
it.
It
sets
up
rootless,
container
support
for
you.
It
builds
in
the
star,
gz
snapshotter
support.
B
But of course, there are many pieces that exist around that: console support; cgroups, and our cgroups implementation is actually used by other projects outside of containerd; our website; and ttrpc, which is the lightweight GRPC client. When we look at the architecture, that's how the shim actually talks to the containerd daemon itself, managing that runc process that I talked about. And again, there are many others here; we've built some release tooling.
B
So
again,
that's
that's
kind
of
what
you'll
see
here.
There's
I
guess
24
total
repositories
again,
probably
17
18
of
those
are
core
of
the
project
and
five
to
ten
of
them
are
non-core
and
there's
also
tooling.
Some
projects
on
the
website.
B
Yeah, absolutely, and nerdctl is interesting because it has a bunch of contributors who maybe aren't all that interested in developing a container runtime, but they find it easier to jump in with "hey, I could implement docker inspect, or docker ps." And so nerdctl has quickly grown into a very active project where a lot of different contributors are implementing other pieces of the sort of standard Docker client syntax for containerd.
B
Yeah, so a quick look at a rough architecture diagram may help us as we look at the containerd core main repo.
B
So
as
I
talked
about
there's
all
these
consumers
kind
of
at
the
top,
whether
it's
a
cloud
or
a
specific
tool,
or
capability
and
they're,
probably
calling
into
container
d
via
various
methods,
and
so
the
client
options
are
really
to
use
the
go
api.
B
And
so
again
you
can
use
standard,
go
package,
documentation
for
container
d,
slash
container
d
and
see
all
the
the
the
go
apis
if
you're,
if
you're
coming
from
a
kubernetes
context,
you're,
obviously
kublet
will
be
calling
container
d
via
the
grpc
cri
api
and
then
plugins
within
container
d,
like
the
cri
plug-in
we'll
be
calling
the
go
api
to
drive
container
d
to
do
the
things
kubelet
is
asking
it
to
do
so.
B
Just
start
this
pod
or
you
know,
pull
this
image,
and
then
we
export
prometheus
metrics
as
well
out
of
the
engine,
so
that
next
level
the
core
is
really.
What
you
would
assume
is
is
the
implementation
of
the
container
runtime.
All
this
we've
broken
it
up
into
various
grpc
services.
B
You
know
for
images
and
namespaces
and
snapshots
and
tasks,
and
then,
of
course,
each
of
those
services
has
some
metadata
associated
with
that.
So
bolt
db
is
used
as
the
metadata
store
to
hold.
You
know
these
images
and
their
references
and
the
content
and
the
actual
containers
themselves
and
then
at
the
back
end
it's
you
have
all
the
various
snapshots
implementations,
so
butter,
rfs,
developer
overlay
and
again
you
could
that's
plugable.
B
You
can
even
have
external
snapshotters
like
the
star
gz
snapshotter
project
and
how
all
this
actually
talks
to
to
runtimes
or
via
the
shim
client,
which
is
talking
to
the
other
side
of
that
shim
is,
is
running,
run
c
and
actually
your
containerized
process
behind
that.
But
again
that's
pluggable
as
well.
You
can
write
your
own
shim
for
something
other
than
run
c,
and
so
we
have
shims
for
firecracker
for
lightweight
virtualization,
cata
containers.
Again
lightweight
virtualization,
microsoft
has
run
hcs,
which
runs
the
windows
containers.
B
So if we look at the main containerd repo, again there's lots and lots of content here. There will be a quiz.
B
These are all... actually, we should just open that; it may help people see it more clearly. If I open up the godoc, we'll see a lot of these same things. We have the containerd client, so again, all the ways you can use the client to talk to a running containerd daemon, and various options for that. There are obviously packages for each of the services, so the GRPC services, images, and all the options for when you're starting a container.
B
So again, if you're using the Go API, you're going to say "with image" and reference an image, and then there are options for how you want that pulled, and various snapshot options. If you want to write your own OCI runtime spec with your own options in there, then you can actually pass that in. So the Go API is fairly rich. This is actually how the CRI implementation in containerd uses containerd: it uses the Go API to drive containerd, like creating a new container or creating a new task.
B
So
that's
that's
mostly
the
files
in
this
root
directory
here
or
a
lot
of
the
implementation,
the
go
api
and
then,
at
one
level
down
there's
a
lot
of
the
the
metadata
implementation.
You
know
references
and
the
namespaces
and
metadata
service
and
labels
and
leases.
B
What
else
is
interesting
under
the
package
directory?
So
one
of
the
interesting
things
is
that
this
year
we
changed
cri
from
being
a
totally
different
subrepo
within
the
container,
the
org,
and
we
migrated
and
merged
that
into
the
container
d
code
base
itself.
B
We
were
doing
a
lot
of
kind
of
this
iterative
vendoring,
so
you
fix
something
in
cri
and
then
you
fix,
you
know
what
it's
using
in
container
d
and
then
you
have
to
re-vend
your
cri
back
into
container
d
to
make
a
build
and
then
release
it,
and
so
we
hope
this
you
know,
helps
people
develop
and
use
the
cri
if
you're
a
developer,
to
enable
kind
of
quicker
iteration
on
changes
in
the
cri.
And
so
you
can
see
the
cri
subber
here.
B
This
is
most
of
the
implementation
of
that
cri
api
from
kubelet
and
again
you
know
if
we
look
in
server,
here's
a
container,
create
and
so
again,
if,
if
you're,
using
container
d
as
your
kublet's
runtime,
the
cri
call
to
create
a
container
will
come
through
here
and
then
you
know.
If
we
look
at
this,
it's
actually
using
container
d's
api
to
do
that
container,
create.
And
so
it's
that
linkage
between
cri
and
container
d
being
used
as
a
via
the
go
api.
B
So
that's
that's
kind
of
a
fairly
high
level
overview
of
the
layout
of
the
code,
trying
to
think
if
there's
anything
else
worth
digging
into.
But
I
I
again
it's
it's
a
big
project.
There
is
there.
A
B
Quite
a
bit
of
code
here,
but
it's
it's.
You
know
most
people
find
that
you're
not
making
a
change
that
cr
that
crosses
you
know
this
entire
repo.
B
You
know
I
personally
I'm
not
necessarily
an
expert
on
our
snapshotters.
There
are
other
people
who
are,
and
so
I,
if
you
looked
in
the
snapshotters
projects
and
directories,
you'd
find
very
few
changes.
For
me.
I've
been
focused
more
on
other
parts
of
the
engine,
so
you
know
that's
totally
fine
as
well.
Contributors
can
have
a
focus
area,
an
area
where
they
feel
more
comfortable,
and
you
know
we
have
plenty
of
contributors
that
cover
the
code
base.
A
There's a lot of information there, but, you know, if you're coming to the project and you want to make a change to the API, the first place to start would either be the protobuf files, which have the descriptions, or those Go files in the top-level directory which map to the API. And then you've got a nice clean directory structure with sub-directories for all the different components that those APIs have to interact with.
B
Yeah, so just like any other project, your starting point is to clone the containerd repo and get it set up in some local environment. It's probably good to mention we have a BUILDING.md entry here talking about building and the dev environment. Actually, today, other than installing Go, and potentially installing the btrfs headers and library for your Linux distro, there are really very few prereqs that would be very difficult. In fact, there's this whole section on installing the protobuf compiler.
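As a rough sketch, the setup described above looks something like this on a Debian/Ubuntu-style machine; the package names here are assumptions that vary by distro, and BUILDING.md in the repo is the authoritative guide:

```shell
# Assumed prerequisites on a Debian/Ubuntu-style host; package names vary by distro.
sudo apt-get install -y build-essential libbtrfs-dev  # btrfs headers enable the btrfs snapshotter

# Clone and build containerd itself.
git clone https://github.com/containerd/containerd.git
cd containerd
make binaries        # builds containerd, ctr, and the shims into ./bin
sudo make install    # installs under /usr/local by default
```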
B
If
you're
never
going
to
change
the
api,
you
actually
don't.
Even
you
know,
you
don't
need
the
the
protobuf
compiler
installed
so
yeah,
the
the
other
part
of
it
is
that
if
you
don't
have
run
c
installed
on
your
system,
which
again
is
almost
hard
to
to
do
today,
because
most
distros
will
install
some
container
runtime
components
that
will
install
probably
a
reasonable
version
of
run
c.
B
But
were
that
not
the
case,
you
would
want
to
clone
the
run
c
repository
and
again
run
these
fairly
straightforward
commands
to
install
run
c
and
a
little
shout
out
that
run
c
used
to
we
used
to
care
a
lot
about
which
version
of
run
c
you
installed,
we,
you
could
look
in
our
our
vendoring
go.mod
file
and
find
the
right
release
tag
and
build
that,
but
run
c
is
currently
voting
on
the
v.
1.0.0
release
so
run
c
is
finally
going
to
be
1.0
final,
and
so
you
know
any
kind
of
reasonable.
B
1.0
install
of
run
c
should
work
fine
with
container
d.
There's,
there's
less
of
this
kind
of,
inter
relationship
between
versions
of
run,
c
and
versions
of
container
d
that
you
have
to
worry
about
anymore.
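Those runc steps are roughly the following; this is a sketch rather than the official instructions (runc's own README is authoritative), and the libseccomp package is an assumption for a default build:

```shell
# Build and install runc from source (sketch; see runc's README for the supported steps).
sudo apt-get install -y libseccomp-dev   # assumed: seccomp support is on by default
git clone https://github.com/opencontainers/runc.git
cd runc
make
sudo make install    # installs to /usr/local/sbin/runc by default
runc --version
```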
B
But
again,
if,
if
you're
in
the
exact
version
we
build,
we
actually
have
created
a
a
new
file
which.
B
But
again
what
we
tried
to
do
was
separate
out
vendoring,
from
which
version
we
build
for
ci,
because
it
again
those
things
don't
absolutely
have
to
be
linked
anymore.
But
if
you
do
look
at
our
go
bond
and
again
we
use
go
mod
vendoring.
We
finally
went
through
the
pain
of
switching
to
gomod
and
getting
all
our
vendoring.
We,
we
do
have
a
little
bit
of
a
complex,
replace
rule
set
up
here.
B
So
if
you're
going
to
vendor
container
d,
you
need
to
also
do
these
same
replacements,
and
there
are
some
tricks
here
with
empty
mod,
which
you
can
go.
Read
the
the
pr
about
that
people
much
much
more
skilled
in
the
art
of
of
go
mod,
set
that
up
but
again
run
c.
Here
you
can
see
we're
using
1.0.0.rc95.
B
Believe it or not, containerd builds natively on macOS, not just in a container, and there are people working on it: there are a couple of open issues, and I think even a PR, about using some BSD kernel capabilities to actually run containers. So you can't run containers on Mac, but you can build the project, and our per-PR CI makes sure the build isn't broken. I want to say it even has a short test suite, but I'm not sure if I'm right about that; we can go look. So yeah, mostly you're going to want to be on Linux, but hey, if you want to build on Mac, you can do that.
B
So you can actually use nerdctl as the client on Mac, driving the embedded Linux VM, similar to how Docker Desktop works; if you're interested in that, it's an interesting new project to play with as well. But back to Linux: we're here at our command line, I've checked out the project, I have runc, and I have all the necessary prerequisites. Probably the most interesting, easy thing to do as far as building is "make binaries". That's going to build ctr, which is again the simple client. If you read through our README, we say ctr is unsupported; we mean that in the sense that ctr isn't part of that API contract that we offer in containerd, it's simply a sort of nice admin-type tool.
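As a quick illustration of that admin-type usage, here are a couple of common ctr invocations; these assume a running containerd daemon and root privileges:

```shell
# Hedged sketch: poking at a running daemon with ctr (needs root and a running containerd).
sudo ctr images pull docker.io/library/alpine:latest             # fetch an image through containerd
sudo ctr run --rm -t docker.io/library/alpine:latest demo sh     # start an interactive container named "demo"
sudo ctr containers ls                                           # list containers the daemon knows about
```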
B
We build a stress tool; that's just an interesting use case for trying to test containerd.
B
You
know
24
7,
just
running
containers,
tasks,
image,
poles
and
one
of
the
maintainers
has
a
has
a
live
system
constantly
running
that
on
every
commit,
and
then
there
are
three
shims
and
I
I
won't
spend
a
ton
of
time
here,
but
the
shim
api
has
has
matured
over
the
the
four
or
five
years
that
the
project's
been
around
and
so
we're
now
on
the
the
v2
version
of
the
shim,
which
is
actually
the
second
to
run
cv1
and
run
cv2.
B
Docker also uses containerd; I think most people know that, but I didn't say it in the opening. In the old packaging, Docker actually delivered containerd. Now, most Ubuntu distributions have their own containerd package, and Docker has its own package and simply depends on the containerd service running, most likely through systemd, on your machine.
B
So
because
I
have
docker
running
and
it's
already
using
the
container
d
installed,
I
I
usually
play
tricks
in
my
environment
to
either
shut
down,
docker,
replace
container
d
or
point
to
container
d
in
slash
user.
Local
there's
also
things
you
can
do
like
start
container
d
listing
on
a
different
unix
socket,
and
so
then
you
know
I
could
basically
run
this
container
d
even
while
docker
is
is
depending
on
my
system
container
d.
B
It defaults to a containerd.sock in this root-owned directory, so this is my system-level containerd, if you want to call it that, running and listening on this socket.
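One way to sketch that second, development-only daemon is a small config file; the paths below are arbitrary examples for illustration, not containerd defaults:

```toml
# dev.toml: a second containerd beside the system one (paths are illustrative choices)
root  = "/var/lib/containerd-dev"   # where images and content live
state = "/run/containerd-dev"       # runtime state

[grpc]
  address = "/run/containerd-dev/containerd.sock"
```

Starting it with something like `sudo containerd --config dev.toml`, and pointing the client at it with `sudo ctr --address /run/containerd-dev/containerd.sock version`, keeps it out of the way of the system daemon.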
B
Basically, create another config, just like I did, with a different socket, and I would need to set up the GRPC address to be different from the default. So again, that's one way. But if I'm happy just running the tests, the nice thing is the test suite does this for you: the integration part of the test suite will start its own containerd on its own socket with its own config. So the nice thing is, if I run "make test", I'm not going to have a problem with this weird interaction with the system-level containerd. And again, this will probably take a good while; we can leave it running for a minute.
B
I
think
the
it
may
be
well
I'll
I'll.
Give
you
a
chance
to
tell
me
what
you'd
like
to
to
poke
at
next,
but
it
may
also
be
worth.
B
Noting
that
our
ci
is
set
up
to
use
github
actions,
and
so
you
know
for
for
every
actually
that's
that's
painful
to
look
at
the
ammo.
We
should
just
look
at
a
pr
and
we'll
look
at
one
of
my
pr's
and.
B
So
this
this
kind
of
gives
you
a
feel
for
what's
going
to
happen
and
again,
maybe
maybe
I'm
jumping
ahead
if
we're
going
to
do
an
issue
in
a
pull
request.
We'll
get
back
to
this,
but,
as
I
mentioned,
we're
cross-building
to
make
sure
build,
is
working
across
a
number
of
architectures,
including
my
colleague,
sam
karp,
just
added
freebsd
support
recently
and
that's
still
maturing,
and
it
has
a
run
c.
B
A
like
replacement
called
run
j,
that's
its
own,
separate
project.
So
again,
we're
crossfit
we're
linting
across
various
arc,
os's
we're
cross
building
across
various
cpu
and
architecture,
pairs,
we're
then
building
all
the
binaries
and
then
running
integration,
and
I
I
was
right.
We
do
run
the
unit
tests
on
mac
linux
integration
with
a
bunch
of
matrix
of
different
of
those
shims
and
also
running
against
c-run,
which
is
a
c
a
replacement
for
run
c,
written
in
c
that
red
hat
created.
B
So
again.
This
is
this
is
kind
of
what
happens
when
you
create
a
pr.
You
know
all
these
steps
are
going
to
happen
in
github
actions
to
validate
that
your
change
is
a
breaking
a
set
of
architectures.
A
set
of
operating
systems
and
then
running
the
tests.
A
Okay,
so
let
me
clarify
a
few
things
there,
so
you
know
the
feedback
loop
for
a
new
contributor
come
into
the
project.
You
know
they
clone
the
project
like
they
do
with
any
other
git
repository
a
pretty
solid
way
to
start,
I
guess,
would
just
be
running,
make
real
binaries,
which
will
just
ensure
that
you
have
the
correct
tool
chain
and
everything
that
you
need
to
actually
build
the
project.
A
Yeah, you know, it's become such an easy question these days when you ask someone how to build a Go project. If you go back just 18 months, it was, well, which vendoring tool, which dependency manager are they using...
B
Yeah, I think we've said Go 1.13; I don't know about our development main branch. We just released 1.5 recently. I guess why I'm hesitating is that I'm not sure if there's anything in the current development branch that needs newer: for example, Go has the errors package, and we started using errors.Is, I think, and I don't remember which Go release that came in, but it might be higher than 1.13 now. But that's a great first PR for someone.
B
Yeah, so "make integration" is what's going to actually start a containerd instance.
B
Yeah, so this needs runc to be installed, because obviously it's going to be starting and stopping containers. It needs root, because that's needed to start the containerd daemon. And you're going to need an internet connection, and hope the container registries we use during integration are not having downtime or an outage, because it'll be pulling images.
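So the local test loop described above amounts to roughly this; treat it as a sketch and check the containerd Makefile for the authoritative targets:

```shell
# Rough local test loop (sketch; the containerd Makefile is authoritative).
make binaries            # confirm the toolchain and a clean build first
make test                # unit tests: no daemon, root, or network needed
sudo make integration    # starts its own containerd on a private socket; needs runc, root, and network
```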
A
I
think
I
guess
you
know
if
you're
making
your
first
contribution
to
the
project
you
know
you'll
know
which
parts
of
the
system
you're
hopefully
trying
to
change
unit
tests
may
be
enough
to
get
that
pr
up
and
if
you're
doing
anything
that
modifies,
maybe
container
creation.
The
integration
tests
are
probably
quite
a
good
thing
to
run
as
well.
A
Okay,
so
maybe
we
could
take
a
look
at
the
pull
request
format.
I
don't
know
if
you
have
at
a
trivial
issue.
You'd
like
us
to
work
on
or
you
want
to
just
run
through
a
pill
request,
it's
up
to
you,
but
if
we
could
just
maybe
take
a
look
at
the
template
and
talk
about
some
of
the
conventions
that
container
d
project
uses
there
too.
B
That's just like a general question, and we also added a link to try to get people to join the CNCF Slack, and to point out that the containerd and containerd-dev channels exist there for people's questions and a little more interactive way to talk to community members.
B
The
only
distinction
there
in
the
channel
names
container
d
we
see
is
like
anybody,
end
users
you're
playing
around
you're,
trying
it
out
container
d
dev.
We
we,
we
assume,
that's
someone
who's
interested
in
maybe
contributing
or
has
a
question
about
how
it's
built
or
you
know,
is
trying
to
extend
it
in
some
way
or
use
it
in
their
project.
B
You know, we're fine if people mix that up from time to time, but that's kind of the split of the channels.
B
We
also
formalized
our
security
policy,
so
I'd
shown
some
of
that,
and
so
this
is
kind
of
nice
because
we
can
link
directly
to
that,
and
so
what's
left
is
again
just
straightforward
templates,
for
I
found
a
bug
which
again
looks
like
a
lot
of
other
templates
out
there.
What
would
you
do?
What
results?
What
did
you
expect
and
then
we
asked
you
know
for
some
output
of
version.
B
You
know
show
us
if
it's
relevant
your
runc
version,
your
cri
configuration
what
kernel
you're
on
and
then
we
we've
we've
tried
to
toss
in
some
helpful.
You
know,
wait,
you
know.
If
continuity
is
hung,
can
you
provide
us
a
stack
trace
by
you
know
following
these
commands,
so
yeah
fairly
straightforward
people
that
follow
this
get
a
lot
more
help,
because
if
they
don't
do
this,
you
know
our
first
response
usually
is:
can
you
provide
you
know,
version
details,
etc?
A
And I think we can see... go ahead. I was just curious: if I'm coming to the project and I've got a great idea for, like, a new feature, is opening an issue to start a discussion the best way? Is having a pull request with a proof of concept the best way? Is there an RFC process? I guess for smaller changes that's not important, but maybe for larger ones.
B
Yeah
so
yeah,
so
we
we
have
not
ever
felt
the
need
for
like
a
full
kind
of
formal
proposal
process.
You
know
for
new
ideas
or
new
features.
Obviously
it
can
be
really
helpful
if,
if
it's
not
a
minor
thing
to
you
know,
just
join
one
of
those
channels,
the
container
dev.
B
But
opening
an
issue
is
is,
is
definitely
a
reasonable
alternative
or
next
step,
for
example,
kaz
one
of
our
reviewers
just
added
this
a.
C
B
Days
ago,
that's
you
know,
should
we
add
health
checks
like
docker
hat
like
docker
engine
has?
How
would
we
do
that?
What
are
the
pros
and
cons?
That's
something
he
had
already
chatted
with
with
the
maintainers.
Should
you
know
do?
Do
you
all
think
that
this
is
something
worth
considering
and
we're
like
yeah?
You
know
open
it
open
a
feature
label
on
that
again,
the
the
template
provides
that
labeling
and
then
we
can
potentially
add
more
labels
like
windows
or
other
interesting
labels
on
the
issues.
B
Yeah, so we've done maybe a poorer job than we'd like on marking, you know, "beginner", "intermediate", "expert", "help wanted". These were labels we created, especially in the early days, to try to help people understand where they could fit in.
A
So it sounds like, if someone new is coming in and they can't navigate the issues and find something that's relatively simple for them to pick up, the community calls might be a good way to start a discussion, or the containerd-dev channel on the CNCF Slack: just say "hey, I want to contribute, I don't know how to start", and hopefully someone there will help you out.
B
Yeah, so, you know, my workflow is that I tend not to really use this "new pull request" button. I mean, obviously you can use this and select a branch and compare changes, but in my workflow, you know, I have something I want to do.
B
I think that would... yeah, I think that was when there was that issue with signal handling, SIGINT, and writes and reads potentially needing to handle the interrupt anyway.
B
So
let's
say
we
found
out
that
that
we
needed
115,
and
now
I
guess
we
can
leave
so
you
know
I've
made
this
change,
I'm
going
to
commit
it
update
minimum.
B
Or
so
the
we
do
have
project
checks
that
run
early
in
those
github
actions.
The
format
we
want
is
the
standard
validation
that
a
lot
of
other
projects
use
so
docker
uses.
This
run
c
uses
it,
and
so
it
expects
like
a
sort
of
title.
I
guess
we'll
call
it
up
to
75
characters,
then
a
blank
line.
B
I
realized.
We
then
some
kind
of
description
we
now
need
go,
went
up,
15,
dot,
x
ability,
then
you
need
a
signed
off
by
and
what
we
would
like
you
to
do
is
use
your
real
name,
and
so
some
people,
you
know,
put
their
github
id
here,
but
again
for
the
dco
compliance
we'd,
like
people
to
use
their
their
real
name
and
then
a
obviously
their
email,
and
so
this
is.
This
is
the
the
format
that
we
expect
and
then
we'll
fail.
Ci.
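That expected shape can be sketched with plain git; the repository, file, title, and identity below are made up for illustration, and the -s flag is what adds the DCO trailer that CI checks for:

```shell
# Sketch of a DCO-style commit: short title, blank line, body, Signed-off-by trailer.
# The repo, file, name, and email here are hypothetical.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo "minimum Go version: 1.15" > BUILDING.md
git add BUILDING.md
git -c user.name="Jane Developer" -c user.email="jane@example.com" \
    commit -q -s \
    -m "docs: update minimum Go version to 1.15" \
    -m "The code now uses errors.Is, which needs a newer Go release."
git log -1 --format=%B   # title, blank line, body, then the Signed-off-by line
```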
B
And
so
I
I
tend
to
just
you
know
again
push
the
this
has
been
pushed
to
my
fork
of
container
d,
and
so
now,
if
I
just
go
to
the
container
d
github,
has
this
nice
feature
where
it's
like?
Oh
you
just
push
something.
Do
you
want
a
pull
request?
B
Yes,
I
do,
and
so
the
nice
thing
is
you'll
see
that
you
know
it's
using
that
that
sort
of
title
line
of
my
commit
as
the
title
of
the
pr
and
then
everything
else
is
just
put
in
inside
the
first
comment
and
again
there's
the
diff
of
my
change
and
obviously
I
can
create
that.
B
I
guess
I'm
not
going
to,
because
I
have
no
idea
if
that's
really
true,
but
what's
going
to
happen,
is
that
automatically
a
few
things
will
happen
first,
sadly,
because
of
crypto
mining.
If
you've
never
contributed
to
container
d
or
any
of
these
repos,
it
will
not
run
ci
until
one
of
us
with
commit
access
to
the
repo
clicks,
a
button
that
says
authorize,
you
know
run
running
of
ci.
B
And, I forget, for the end-to-end tests for containerd and the kubelet, we're also running arm64. Because GitHub Actions doesn't have integrated support for arm64 yet, OpenLab runs the arm64 builds and tests and integration.
B
Open
lab
was
having
an
issue
when
I
opened
this
pr,
so
it
didn't
run,
but
though
those
things
are
going
to
happen
and
again,
if
you're
a
new
contributor,
it's
going
to
force
one
of
the
continuity
members
to
authorize
the
end-to-end
test
to
run-
and
there
are
a
few
slash
commands
that
the
robot
will
comment
and
show
you.
B
There aren't a ton of slash commands or robots operating here, other than this one that runs the end-to-end tests, and so we don't have auto-merge or any of those other things. So at that point, most of what you're going to want to care about is that you haven't broken CI, and if you don't understand why something failed, just comment and ask; say "hey, this doesn't look related to my PR."
B
Can
someone
help
me
figure
out
why
this
didn't
work
and
then
yeah
you're,
looking
for
two
different
maintainers
or
reviewers
to
lgtm
your
pr
and
then
it
will
be
merged
and
they
may
we
may
add
labels
like
hey.
This
is
a
really
important
bug
fix.
It
should
be
cherry,
picked
back
to
existing
release
and
so
maintainers
will
add
those
labels
and
they
ask
you
and
even
give
you
the
commands
like.
Can
you
please
cherry
pick
this
commit
against
the
release
branch,
one
four
or
one,
five,
nice
awesome.
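A backport like that typically looks something like the following; the branch naming follows containerd's release/X.Y convention, while the remote names and commit SHA are placeholders:

```shell
# Sketch: cherry-picking a merged fix onto a release branch (remotes and SHA are placeholders).
git fetch upstream
git checkout -b my-fix-1.5 upstream/release/1.5
git cherry-pick -x <merged-commit-sha>   # -x records the original commit id in the message
git push origin my-fix-1.5               # then open a PR against release/1.5
```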
B
That's pretty much it. Yeah, that's perfect.
A
Thank you very much. All right, let's go back over here. So that is our whirlwind guide to contributing to containerd. I hope you all got a lot of useful information there, and I think the really important things to take home are these. If you do want to contribute to containerd, be involved on the CNCF Slack, in the containerd and containerd-dev channels; I'm sure there'll be lots of interesting and helpful people there willing to help you out. Get involved, open issues wherever possible; if there's anything bigger, maybe open a discussion first and try to get some people to discuss the idea, so you make sure it makes sense for the project, and take it forward from there. As far as building and testing goes, there are Makefile targets for everything, so it should hopefully be nice and simple. And that's it: have fun contributing to containerd. And Phil...
A
Awesome, thank you very much. All right: in one hour, Pop will be here with Spotlight, and they will be doing a Sigstore root key ceremony, so come and check that one out. Thank you again, I will speak to you soon, and have a great day. Thanks, bye.