From YouTube: Kubernetes SIG Arch - KEP Reading Club 20220207
Description
KEPs discussed:
- kubectl exit code standardization:
https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/2551-return-code-normalization
- Removing dockershim from kubelet: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim
A
Okay, hi, welcome to this session of the KEP reading club. We have two KEPs today: one is the dockershim removal KEP, and the second is kubectl return code normalization.
A
We have the authors of both those KEPs on the call today, so thank you so much for joining again. Just as a quick reminder: this call, as with all community calls, follows the CNCF and the community's code of conduct, which boils down to "be excellent to each other", and this meeting is being recorded, so please don't do or say anything that you don't want posted online.
A
That being said, we can start off. The first KEP that we will be looking at is the exit code normalization, or return code normalization, KEP.
A
So I'm going to start a timer for around 10 minutes. Let's take that time to read the KEP, understand what it's about, and subsequently discuss and ask questions, if any. We can always extend the time by a few minutes if needed; please just let me know if you need a little more time to finish.
A
Okay, so I am going to bring up the timer.
A
Okay, I'm going to assume that's a no. So, does anyone have questions to start off with?
A
Okay, I have a few questions. Just as a clarification: the feature is gated by an environment variable as well as a feature gate? It's not just gated by the environment variable?
B
No, no, the feature is just gated by an environment variable. As we are working on the client side, having a feature gate on kubectl would be harder to get right, so usually in kubectl things are gated by environment variables.
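A minimal sketch of that client-side gating pattern, with a hypothetical variable name (the KEP defines the real one):

```go
package main

import (
	"fmt"
	"os"
)

// isExitCodeNormalizationEnabled mirrors how kubectl features are often
// gated on the client side: a plain environment variable rather than a
// server-side feature gate. The variable name here is hypothetical.
func isExitCodeNormalizationEnabled() bool {
	return os.Getenv("KUBECTL_EXIT_CODE_NORMALIZATION") == "true"
}

func main() {
	if isExitCodeNormalizationEnabled() {
		fmt.Println("normalized exit codes enabled")
	} else {
		fmt.Println("legacy exit code behavior")
	}
}
```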
A
Another question that I had: I see that in the different approaches that are mentioned, such as creating error-parsing functions, or modifying the current ones, or a hybrid one, when you say "for the execution step", do you mean errors that are returned as a response from the API server? Is that what's meant by the execution step?
B
Yeah, so just to clarify how kubectl works today: usually we have three steps, and SIG CLI is also working to change that, but usually we have the Complete one, the Validate one, and then the Run, right. The Complete one is the one that fills in the structures that need something with default values.
B
As an example, let's say you need to pass something like a name, and you don't pass the name; it then defaults the structure, so the structure is used with default values, right. Then there's the validation side. So in this case, on the Complete side, you don't have an exit code as such, and even this is being refactored, but anyway. The Validate step is the one that takes all of the arguments that you pass and checks them.
B
So far you didn't go to the API server, right. And then you have the Run step, which may have a dry run, or may not have a dry run, or may run only on the client side, if you are using, for example, a kubectl plugin that doesn't go to, let's say, the API server, etc., right. So here we are splitting those into buckets. The first one, the validation one, is like: hey, you have an exit code because you used an invalid argument. And then you have a Run one that may be returned by kubectl still before going to the API server, or after going to the API server, right. And then you may also have some steps where you are calling an external program, like diff when you do a kubectl diff, or when you run a kubectl plugin, or something like that, where you need to provide the user with the information that that error didn't come from kubectl or from the API server, but from a program that was called from kubectl.
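A toy sketch of that Complete/Validate/Run convention; the options struct and its fields are invented for illustration (real kubectl commands wire these methods into cobra commands), but the three-step shape is the one being described:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// GetOptions is a toy options struct following the convention above.
type GetOptions struct {
	Name string
}

// Complete fills in anything the user did not supply with defaults.
// No exit code is produced at this stage.
func (o *GetOptions) Complete(args []string) error {
	if len(args) > 0 {
		o.Name = args[0]
	}
	if o.Name == "" {
		o.Name = "default"
	}
	return nil
}

// Validate checks arguments locally, before anything reaches the API
// server; failures here land in the "invalid argument" bucket.
func (o *GetOptions) Validate() error {
	if strings.ContainsAny(o.Name, " /") {
		return fmt.Errorf("invalid name %q", o.Name)
	}
	return nil
}

// Run does the actual work; errors here may come from the client, the
// API server, or an external program such as diff or a plugin.
func (o *GetOptions) Run() error {
	fmt.Println("would fetch", o.Name)
	return nil
}

func main() {
	o := &GetOptions{}
	if err := o.Complete(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := o.Validate(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := o.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```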
A
Yeah, got it, got it. So you mentioned the error from the external command, right. If I understand correctly, the behavior and implementation of that is yet to be discussed, right? How that will be normalized? Okay, got it.
B
Yeah, correct. That one is being tackled by Ross. I had to step down from the KEP, too many things to do, but yeah, this is still to be discussed between SIG CLI, Ross, and me. Now that the KEP is in alpha, this can still change. The idea...
A
Is there a tracking issue for this on k/k, with work that needs to be done, in case you need help? Where can folks follow along or help out, if at all? Is there some place like that?
B
Yeah, that's a great question, because some folks reached out to me about that. I would say that right now... to be honest, I don't know what the right process is after a KEP gets merged. Usually when I have a KEP of my own merged, I start implementing the thing, right, and I don't know if this is the right thing, because sometimes it's easier to... so yeah, Jim's asking if we do have an umbrella issue.
B
We have the issue on the enhancements repo, but we don't have an umbrella issue on the k/k repo. What we do have are the sparse issues that generated the KEP, right: people complaining about the exit code in some places, and that generated the KEP. But that's a great question, and I should discuss it with SIG CLI.
A
Okay, yeah, that sounds good. This was interesting. I will for sure be following along with this KEP, because...
B
I was just going to say, we still have time, but if someone has questions or wants to jump into this, feel free to reach me on Slack. I'm usually responsive, and I can... yeah, go ahead.
D
Yeah, quick question. You were talking about those three steps, right, like Complete, Validate, and Run. Are there diagrams or anything like that anywhere on how the structure of the kubectl commands is implemented?
D
Right, I got my start on the SIG CLI side too, and it's like, you can compile kubectl really easily compared to any of the server-side stuff. All you need is a server to talk to, and you can make all sorts of changes on the CLI side. So it's definitely very approachable.
D
It works on Windows, macOS, as well as Linux, so I definitely recommend people look at kubectl as a starting point, see if there are changes that need to be done, and pick something up.
B
Yeah, so I guess that would be the first k/k issue... yeah, cool. I usually try... because, yeah, kubectl is really an amazing place to start, and it's not like you need to understand how iptables and IPVS and everything work in kube-proxy, or like API machinery. So yeah, that's cool, I've sent it to you folks. Cool, so we have a conventions doc. Amazing, thank you. And I've also sent you folks an issue about the refactor that's going to be put in place.
B
I guess right now it's a proof of concept, as far as I remember, but the idea is to make some split between the flags and the options, and maybe not have the Complete anymore. So I would like to invite you, if you want to jump into that. And I would say that SIG CLI meetings are pretty inclusive, as are any other Kubernetes meetings, but it's like, it's really cool to jump in and just say hey.
B
Yeah, so yeah, this is amazing, because the first table that we wrote was actually inspired by the way Docker works, right. So when you have an out-of-memory, as an example, or you get a kill -9, what Docker and Kubernetes do is turn that into 128 plus the signal number.
B
So you know that the exit code came from the container runtime and not from the container itself, right; if I recall correctly, it's that. And we do have a table of exit codes in the TLDP (the Linux Documentation Project).
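A small sketch of that convention: a process killed by signal 9 (SIGKILL) surfaces as 128 + 9 = 137. This is a minimal Unix-only illustration, not runtime code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// Run a child process that kills itself with SIGKILL (signal 9).
	cmd := exec.Command("sh", "-c", "kill -KILL $$")
	err := cmd.Run()

	if exitErr, ok := err.(*exec.ExitError); ok {
		if ws, ok := exitErr.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
			// Runtimes fold signal deaths into the exit code as
			// 128 + signal number, so SIGKILL (9) is reported as 137.
			fmt.Println("reported exit code:", 128+int(ws.Signal()))
		}
	}
}
```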
B
I guess it's referenced in the KEP, but with reserved codes and codes that can be used and codes that can't. So the first approach that I took was actually: hey, I want to have a return code that represents, as an example, something that happened on validation, and then something that happened on the server side, and so on. And discussing inside SIG CLI...
B
...they gave me the insight: hey, you just need to be careful not to make that too broad, because sometimes it's hard to know what the real problem from the API server actually was. So you don't want to have one exit code because you got an access denied, and another one because the resource doesn't exist on the server, because both of those actually come back as the same kind of thing from the API server.
B
So we tried to make it concise, but at the same time make something that can be meaningful to people, right: not just random exit codes, but something where you may have a table and say, hey, this was from the server, this was from the validation, or I got that from the external code, which was, say, a kill -9 in my plugin, or diff, or something like that. Does that make sense?
A
So
I
think
this
is
what
you're
talking
about
right
like
so
in
the
exit
code
conventions
they've
mentioned
zero
one
and
three
as
their
exit
code
numbers
and
they
basically
like
zero,
is
success
and
non-zero
is
failure,
but
like
what
and
how
isn't
really
specified
or
like
there's,
no
concrete
way
to
know
it.
B
Yeah, so this is actually interesting, because sometimes exit codes don't actually mean an error; they mean that something wasn't expected, but not an error. As an example, diff: if you run the command diff to see differences between files, or git diff, or something like that, an exit code of two doesn't mean that you've got, say, a network error. It means that you've got a difference.
B
We were having a success, because you've got two different manifests, as an example, but for CI, for continuous integration, it was like: hey, you got an error, because it's different from zero, and it wasn't that. So this is the idea of the splitting as well, and of getting the proper exit code from the forked process, to say: okay, you can parse that.
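A sketch of how a CI step could treat that split, using kubectl diff's documented convention (0 = no differences, 1 = differences found, greater than 1 = kubectl or diff itself failed); the manifest name is made up:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// kubectl diff follows the diff convention: exit code 1 means
	// "differences found", which is not a failure for this pipeline.
	cmd := exec.Command("kubectl", "diff", "-f", "deployment.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

	err := cmd.Run()
	if err != nil && cmd.ProcessState == nil {
		// kubectl could not be started at all.
		fmt.Fprintln(os.Stderr, "could not start kubectl:", err)
		os.Exit(1)
	}

	switch code := cmd.ProcessState.ExitCode(); code {
	case 0:
		fmt.Println("no changes")
	case 1:
		fmt.Println("differences found; not a failure for this pipeline")
	default:
		fmt.Printf("kubectl diff itself failed (exit code %d)\n", code)
		os.Exit(code)
	}
}
```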
B
Correct, we should avoid that, yeah. But even this is hard, because this is for bash; what happens if you are using fish or zsh, right? Yeah, so this is pretty hard. I didn't know how hard it is to keep compatibility between things, right. The next one is dockershim, so, right, yeah.
A
Okay, dockershim. Let me pull up the timer once again.
A
Okay, I'm assuming that's a no. So, are there any questions? So, you are the communications shadow on the release team this time, right? How are things going with the dockershim part?
E
So at present we don't have any such burdens; we don't have anything to do at present. The announcement just happened, so after that the blogs are getting written; we don't have many blogs prepared by now to communicate. I think after that, probably by mid-February, in the next two weeks, we'll be having a lot of things to communicate, especially with the blogs and all that coming up.
A
Sounds good. So I had a few questions with respect to, basically, whatever has happened so far, right: first deprecate, then make sure there are no dependencies that still exist, and things like that. Now that dockershim is finally removed, that PR has merged, finally, what is left in terms of work to be done? Because I see there are a few CI failures happening here and there, because the shift to containerd needs to be done, and folks are working actively on that as well, right?
D
The main thing that we'll end up facing with dockershim is there'll be a bunch of people who haven't followed up, who didn't care, and who will scream bloody murder when the time comes to upgrade to 1.24, right. And there's nothing...
D
We
can
do
about
those
people
at
this
point
other
than
you
know,
keep
things
ready
in
terms
of
like
materials
that
we
can
point
them
to
and
when
they,
when
they
show
up,
we
gotta
you
know
say
here
is
a
material
here
is
how
you
switch
here
is
how
you
install
container
d
and
things
like
that,
but
there
will
be
a
lot
of
people
that
will
be
affected,
because
you
know
they
will
have
implications
on
like
they
would
they
wouldn't
want
to
bring
their
workloads
down?
D
That
kind
of
scenarios
will
be
there
and
they'll
be
worse
off,
so
the
mitigation
factor
is.
We
still
have
three
releases
that
up
to
three
releases
that
we
support
for
one
more
year,
so
there
is
more
time
for
them,
so
they
cannot
say
hey.
I
want
all
the
new
features
in
124
plus.
I
don't
want
to
move
the
docker
so
he's
like
you
pick
your
poison
right
like
if
you
want
to
do.
If
you
want
to
use
all
the
things
that
comes
in
124,
then
hey
you,
don't
have
a
choice.
D
You
know,
docker
shim
is
gone,
switch
to
continuity
or
cryo,
so
that
is
basically,
you
know
the
voice
that
we
need
to
tell
them
like.
All
of
us
will
have
to
tell
the
same
story
right
so
that
that's
going
to
be
the
most
important
part
here
right.
So
we've
been
shouting
from
the
rooftops
forever
now
right
and
if
people
are
still,
you
know
not
able
to
make
the
switch,
then
there
is
going
to
be
a
problem
for
them
right.
D
So
the
other
thing
that
we
we
probably
have
to
invest
more
is
we
are
trying
to
do
a
bunch
of
things
from
the
kubernetes
side
of
things
right
like,
but
we
still
are
not
doing
much
in
from
the
creole
side
or
the
continuity
side
where
they
are
advocating.
You
know
they
are
coming
up
with
like
hey.
This
is
the
runbook
for
you
to
switch
over
from
docker
to
continuity
right
like
so.
Those
communities
also
need
to
step
up.
You
know
because
there
will
be
more
people
knocking
on
their
doors.
D
Simple
examples
would
be
hey,
I'm
using
the
special
docker
json
config
thingy
for
specifying
ssl
certificates
for
my
private
registry.
How
do
I
do
it
in
continuity?
Right
so
and
then
I'm
using
this
specific
set
of
flags
when
I
bring
up
docker,
you
know
how
do
I
configure
continuity
to
do
exactly
the
same
thing
that
I
was
doing
with
docker?
D
So
then
the
other
set
of
things
that
people
will
end
up
with
is
hey
I'm
using
this
third-party
container
and
it
works
fine
with
docker,
but
it
doesn't
work
fine
with
plain
container
d
right.
So
so
there
is
some
some
amount
of
like
install
steps,
configuration
steps
and
then
troubleshooting
steps
will
end
up.
They'll
they'll
have
to
go
through
that
process
of
you
know
certifying
for
their
own
companies.
So.
So there'll be more traffic around issues and PRs, and just chatter in the containerd channels, for example, when people have to make the switch; so they are kind of ready for it. But I don't see too much effort being put in by the other communities, because even cri-dockerd, for example, right, they barely... they have one tag that they've tagged, and they are not ready for it either, for when people come knocking on their doors.
D
So
we
have
to
go
to
these
communities
and
tell
them
to
like
hey
buckle
up.
Things
are
happening
and
you're
going
to
be
facing.
You
know
influx
of
people
who
are
coming
to
do
stuff
and
they'll
need
your
help.
D
A
Yeah, that's true. So another question that I had: there was a survey done that I read about in one of those links, right, and there were around 600 responses collected from that survey. So was this survey sent to individual vendors, and is it like one response per vendor, or...?
D
It
we
don't
know,
we
just
throw
it
out
and
whoever
responds
responds
right,
and
you
know
there
might
be
a
lot
of
people
who
have
already
made
the
change
for
them.
This
is
a
know,
so
you
we
won't
get
to
hear
from
those
people
right.
Typically,
it's,
like
you
know,
yelp
or
google
reviews
right,
like
all
the
people
who
had.
B
C
D
C
D
C
D
D
D
D
Like, we never used to do any of these things before, right; the maximum we would do is send a note to kubernetes-dev, so that if somebody comes back later we can say: hey look, there was the email that was sent to kubernetes-dev three months ago, and you were not following that, so what can we do, right? So most of this is like training...
D
The
people
who
are
coming
to
use
kubernetes
right
here
to
read
the
release,
notes
do
the
things
that
are
mentioned
there
pay
attention
to
what
is
being
deprecated
right.
So
we
gotta
just
keep
banging
those
drums
at
this
point
and
we
still
have
time
right
like
124
is
coming
in
april,
something
so
there's
still
a
couple
of
months
for
us
to
keep
doing
this
yeah,
but
they
will
definitely
be
pissed
off.
People
have
the
end
of
the
things,
there's
nothing
we
can
do
about
it.
A
That's
true
well,
another
question
I
had
was
so
I.
A
To
be
sure,
because
the
latest
one
that
I
saw
so
cri
right
now
is
in
beta
right,
if
I'm
not
mistaken,
and
it's
not
g8
yet.
D
Yeah
the
label-
ga
is
not
there,
but
for
all
practical
purposes
it
is
frozen
yeah.
But,
yes,
there
is
a
couple
of
other
things
that
people
want
to
modify
add.
So
it's
going
to
be
augmenting
rather
than
modifying
and
the
other
twist
here
is
typically
when
we
talk
about
version,
skew
and
api
structure
and
things
like
that,
we
do
it
for
the
things
that
are
served
by
the
api
server
and
not
for
grpc
right.
D
So
for
grpc
we
do
not
have
yet
a
set
of
best
practices.
So
to
say,
we
always
assume
that
adding
a
field
in
grpc
should
be
okay.
Adding
a
new
method
should
be
okay,
but
changing
the
signature
might
be
troublesome
in
some
cases,
but
might
not
be
troublesome
in
other
cases,
for
example,
if
typically
the
client
does
not
send
the
field
and
we
deleted
the
field.
It's
fine
there's
some
some
things
like
that.
So
at
this
point
we
still
have
a
few
more
issues
there
on.
D
How
do
we
support
multiple
versions
of
grpc
from
the
container
d
side,
for
example?
Right
so
on
kubernetes
side?
I
think
if
we
look
for
we
look
for
the
newer
one.
If
the
newer
one
is
not
there,
then
we
go
to
the
older
one,
something
of
that
nature,
but
on
the
container
is
say
how
do
we
support
both?
At
the
same
time,
just
in
case
there
are
clients
that
are
written,
that
use
the
cra
api
and
they
they
can't
move
to
the
newer
one.
Yet.
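A rough sketch of that newer-first probing from a client's point of view: try the CRI v1 service, fall back to v1alpha2 if the runtime doesn't serve it. The client wiring, socket path, and error handling here are assumptions for illustration, not the kubelet's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"

	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	runtimev1alpha2 "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

// probeCRIVersion asks the runtime for its version via the newer CRI API
// first, falling back to the older one if that RPC is unimplemented.
func probeCRIVersion(ctx context.Context, conn *grpc.ClientConn) (string, error) {
	_, err := runtimev1.NewRuntimeServiceClient(conn).
		Version(ctx, &runtimev1.VersionRequest{})
	if err == nil {
		return "v1", nil
	}
	if status.Code(err) != codes.Unimplemented {
		return "", err
	}
	// The runtime does not implement CRI v1; try v1alpha2 instead.
	_, err = runtimev1alpha2.NewRuntimeServiceClient(conn).
		Version(ctx, &runtimev1alpha2.VersionRequest{})
	if err != nil {
		return "", err
	}
	return "v1alpha2", nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Example socket path; containerd commonly listens here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	fmt.Println(probeCRIVersion(ctx, conn))
}
```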
A
Yeah, yeah, that makes sense. Just one last question that I had, in terms of the dependencies that were being vendored in. So, moving forward, assuming that a few of those dependencies were actually used by some part of the codebase, right, I'm seeing discussions around a separate org, or an internal org, being created for those repos. There's like a... I forgot the repo's name, I think.
Yeah, so did that discussion start because of this, that we needed to start talking about what would happen to those dependencies after dockershim goes away, or is it independent? Yeah.
D
This
was
basically
independent,
crack
been
going
on
forever.
We
have
you
know
so
when
we
want
to
vendor
something.
We
have
a
few
choices.
One
choice
we
have
is:
is
it
an
independent
api
small
enough
that
we
can
stick
into
like
kto
utils.
D
So that is one choice. The second choice is: is this like a third-party thing? Then there's a third-party folder in kubernetes/kubernetes. So that was the second choice. And the third choice was just creating repositories in the kubernetes or kubernetes-sigs GitHub org. So an example that you would see is, like, klog was forked from glog.
D
You know, so we were trying to use internal, as in, you know, k8s.io/internal/distribution, and we were trying to do it that way, and that didn't work out either, because it needs to be k8s.io/kubernetes/internal, which then means that it is served off of the kubernetes/kubernetes repository itself. So yeah, we have to, you know, go back to the drawing board and see if there's anything else that is coming down the pipe. But worst-case situation, just for the distribution stuff...
D
We'll
probably
you
know,
pick
one
of
the
existing
options
and
not
right
near
this
point,
because
deadlines
are
coming
up,
you
know
so
you
it
would
be
good
to
have
the
dependency
stuff
sorted
out
by
end
of
february.
D
So
if
you
need
to
create
new
or
ci
jobs,
you
know
things
like
that:
make
files,
docker
images,
anything
that
requires
a
bunch
of
things
to
be
done.
We
should
try
to
do
it
as
early
as
possible,
which
is
part
of
the
reason
why
docker
shim
removal-
I
did
it
even
before,
like
one
week
after
the
last
race
was
cut
and
before
people
ended
up
going
to
you
know
on
vacation
for
the
last
two
weeks
of
december,
so
there
was
a
lot
of
time
to
stabilize
things.
D
Otherwise
you
know
typically
we'll
be
like
hey
it's
it's
now,
it's
too
late.
Let's
wait
for
the
next
release
right.
C
D
We ended up doing things like that too. And going back, the glog-to-klog move also happened in a similar way, you know, just before a vacation: we sorted things out and pushed it out, so when people came back from vacation, you know, they were ready; we had some amount of work that was already done, and people could pick it up. And with that...
A
Got it, that sounds good. Okay, so I don't have any more questions. Does anyone have last-minute questions? Otherwise we are almost at time.
E
Just a simple doubt: you know, there is a built-in shim which we were supporting for Docker, and we are removing it. I had one doubt: since all these container technologies come under the OCI initiative, shouldn't the shim that we were supporting be, you know, universal to all the containers, like maybe containerd or CRI-O or everything? Because I'm writing a blog on how to shift your container engine in a node from Docker to containerd.
E
I
I
realized,
like
you
know,
when
we
try
to
install
docker
and
everything
behind
the
scenes.
There
is
container
d
so
why
this
same
thing
is
not
universal
and
we
can't
support
it
as
oci
supports
a
runtime
spec
for
the
file
system.
Doesn't
it
support
the
sim
also?
D
So,
for
that
there
is
a
long
history
right
like
docker
came
first,
the
oca
came
after
right,
like
yes,
when
when
people
were
looking
at
docker
and
trying
to
see
how
you
know,
docker
works
and
other
vendors
also
wanted
to
participate
in
the
ecosystem,
and
so
they
said,
okay,
fine.
What
are
the
things
that
docker
is
doing,
that
we
can
standardize
so
that
we'll
have
like
a
same
interface
kind
of
thing
to
work
with
so
oca
images
are
probably
what
you
are
referring
to.
D
You
know
what
is
what
is
the
structure
of
the
image
because
everybody
wants
to
produce
images
and
those
images
should
be
consumed
by
docker
or
continuity
or
cryo.
So
then
that's
where
they
standardize,
they
standardize
the
image
format.
How
does
the
image
look?
What
are
the
different
layers
in
the
image
and
things
like
that?
D
But
for
the
longest
time
you
know,
docker
didn't
want
to
replace
their
api
layer,
they
have
a
rest
api
and
they
didn't
want
to
change
that,
and
there
was
consensus
outside
of
docker
that
hey
we
need
to
have
an
api
that
does
not
rest
api.
That
is,
you
know
that
can
take
in
more
stuff.
So
grpc
was
the
choice
there.
D
So we will be able to call into it. So that was, like, part of the history on how and why things are the way they are. And even though it's still labeled like alpha/beta, for all practical purposes it's GA. So, you know, we just didn't bother to update the labels on it, and since everybody started using it, it's de facto GA at this point.