From YouTube: 20200916 Cluster API Office Hours
A
All right, thank you. Hello, everyone. Today is Wednesday, September 16th, and this is the Cluster API office hours. Cluster API is a subproject of SIG Cluster Lifecycle. Please follow the CNCF code of conduct, make sure that you raise your hand if you want to speak, and be kind to one another.
A
If you're able to, can you please add your name to the attendee list on the document? I pasted the link in the chat. And if you have any discussion topics that you'd like to bring up, go ahead and add them with your name under the list. Looks like we have a few already, so we'll go ahead and get started. All right, so first off, PSAs. Do you want to take this one?
B
Yeah,
thank
you
hi
everyone.
So
yesterday
we
created
an
just
to
inform
everyone
that
we
created
a
new
on
demand
and
one
job
that
you
can
use
to
run
the
full
copy
and
to
end
test
with,
and
in
order
to
trigger
this,
this
job
you,
you
should
simply
issue
test,
pool
cluster
api
and
to
end
full.
A
All right, thanks for the heads up. I don't think we have any other PSAs. Vince, Andy, anything else?
C
Oh well, so 0.3.10 is slated for the end of the month. There have been a few PRs for a memory leak that we found out about with Andy and Fabrice and Yasin; they have all been merged. There is one more PR that is prepping to rewrite some parts of the cluster tracker, which is the code that actually looks at the workload clusters.
D
Could it be just an RC instead of a beta? I know it doesn't really make...
D
Yeah, I did just get a request to edit this document. Please follow the instructions at the top of the document. Thank you.
A
Yeah, you can just click this link and add yourself to the mailing list; that's faster. All right, thanks, Andy. So I guess we'll move on to the discussion topics, and actually I have the first one.
A
So, I don't know if James is here, but we were talking in Slack, and there's this doc (not quite a proposal) that was started a while back about doing some bootstrap failure detection from the infrastructure providers. We concluded that it was very hard to do this in a cloud-agnostic way, and that if we were going to use things like a DaemonSet or things like that...
A
It wasn't going to work for nodes that weren't given the chance to join the cluster yet, if things completely failed in cloud-init. And the other thing is that we also want to make sure that whatever solution we get to is bootstrap-provider agnostic, in the sense that it doesn't expect anything kubeadm specific: it doesn't rely on post-kubeadm commands, and it also works with other bootstrap mechanisms, not just cloud-init.
A
So, given that, Jack from Microsoft (Jack's here) and I wrote a doc in CAPZ land to propose how we could do this with the Azure-specific tools that we have available to us. The conclusion we came to, and it seems like James also came to the same conclusion independently, is that we would...
A
We could really use a signal in the bootstrap provider contract that tells us that bootstrap is complete, where it's up to the bootstrap provider to define what "complete" means for that specific provider. An easy way to do this would be to have something like a sentinel file that is written, or just touched, on the machine when the bootstrap process is complete; then we can hand off to the infrastructure provider from there. So yeah, you see.
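A minimal sketch, in Go, of the sentinel-file idea being discussed here. The file path and helper name are illustrative assumptions for discussion, not part of any agreed contract:

```go
package bootstrap

import "fmt"

// sentinelFile is an assumed, illustrative path; a real contract would have
// to pick one that works for every bootstrap mechanism, not just cloud-init.
const sentinelFile = "/run/cluster-api/bootstrap-success.complete"

// withBootstrapSentinel appends one final command to whatever commands the
// bootstrap provider already renders into its bootstrap script, so the last
// act of a successful bootstrap is creating the sentinel file.
func withBootstrapSentinel(commands []string) []string {
	return append(commands,
		fmt.Sprintf("mkdir -p /run/cluster-api && touch %s", sentinelFile))
}
```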
F
So, in general, I would agree. It's going to be interesting, though, for providers that don't have a way to access the machines. For example, for CAPV we do have a way to access our machines without having to SSH into them, and I guess for Azure you must have something similar. But for providers that don't have any way to access machines, either through an API or something like that, would this require SSH access for them?
A
Sorry, when I say write the file, I don't mean the bootstrap provider being the one to actually write it, but it would give the instruction. So, for example, for the kubeadm bootstrap provider, we would add a command in the cloud-init script that gets sent to the infra provider, and that includes the instruction to write that file after it's done doing all the other commands. So it wouldn't necessarily be the bootstrap provider doing the SSH and writing to the machine.
D
It probably needs to be optional, and the SSHing, or however it's accessed, is done by the thing that cares about checking on the status.
G
Would we want that to live in the cluster itself instead, so that there isn't... this is external... It almost feels like it could be eventing on the bootstrap provider to say these things are complete, and then the infrastructure providers could watch and wait on that. I don't know, just a thought. Sorry, coming in late.
D
So that's a really good idea, and I wish it were that easy. The bootstrapping happens inside the system, so inside a virtual machine, a bare metal machine, whatever, and there is no universal way for code running in Cluster API to reach in and figure that out. And there's also no universal, secure way to have the bootstrapping code that's running inside the system communicate back with the management cluster to say: I'm done.
A
Yeah. And I also want to answer something that Lubomir said in the chat about infrastructure providers being able to do that today: we can't assume kubeadm in the infrastructure providers. That's the whole point of why it needs to be part of the contract. We can't run kubeadm init from the infrastructure provider side, because that would assume we're using kubeadm for bootstrapping, which might not always be the case.
A
So yeah, I just wanted to bring this up. If anyone has any thoughts they want to discuss with me after this, feel free to reach out; otherwise this will probably take the form of a follow-up issue. Yes, Vince?
C
Would it be worth starting to add this to the providers that we have today, and falling back if we don't find the file? I mean, we could also do a file in a trap, to say this has failed, kind of flipping it, but then we would get some signal on how we would implement that before we write the CAEP, if that makes sense.
A
Yeah, we can do this. The problem is that if the file is not there, then you don't know if it's because the file wasn't written or because it actually failed. But the whole point is that, after a time, if you don't find that file, you're going to be able to say: okay, bootstrap did not succeed, it didn't send me the signal I was expecting.
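A sketch of the timeout logic just described: the checker (not the bootstrap provider) polls for the sentinel and only declares failure once a deadline passes, since before that it cannot distinguish "not written yet" from "failed". checkFileExists is a hypothetical hook standing in for whatever provider-specific access mechanism is available (a VM agent, a run-command API, SSH):

```go
package infrastructure

import (
	"context"
	"fmt"
	"time"
)

// waitForBootstrap polls for the sentinel file until ctx's deadline expires.
// Absence of the file is only treated as failure once the deadline is hit.
func waitForBootstrap(ctx context.Context, checkFileExists func(path string) (bool, error)) error {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("bootstrap did not signal completion: %w", ctx.Err())
		case <-ticker.C:
			ok, err := checkFileExists("/run/cluster-api/bootstrap-success.complete")
			if err != nil {
				continue // transient access error; keep polling until the deadline
			}
			if ok {
				return nil // bootstrap reported success
			}
		}
	}
}
```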
A
We could do that. That would kind of put an artificial exit code onto cloud-init, which it doesn't have right now. My only concern with that, I guess, is that it might not work for every bootstrap provider. I don't know if there would ever be a bootstrap provider that's not able to do this, but it kind of imposes the trap-on-exit on the bootstrap provider, which is a bigger implementation detail than just writing a file on success. Yeah.
C
Yeah, we can think about it. I just wanted to point it out, because it would be great to see the different implementations that infrastructure providers will put in place, and whether we have any shortcomings, or whether some provider absolutely cannot do this and we might have to think about it another way or something.
A
Okay, I'm going to move on, because we have a lot of other topics and I don't want to take too much time. Okay, so Zack, you have something for us, if you're here?
H
Hey, yep. So I haven't ever formally introduced myself to this community, but I'm Zach. I work at Microsoft, not alongside the CAPZ folks that you might know more in the community, but with another team. I think on the call we have Dinesh, and we have Madhan, and on PM we have David, and we've been working on a Cluster API provider for Azure Stack HCI.
H
Now, if you don't know what Azure Stack HCI is, we'll have David do a short presentation on it in a little bit. But yeah, we're happy to now finally be open-sourcing our Cluster API infrastructure provider. We've been working, I guess, around the scenes of the community for the last couple of months, maybe posting certain issues, but not really fully contributing. We would love to now help out and just have our code out in the open, and act as the other providers do. But yeah, for questions about Azure Stack HCI, I'll now quickly hand you over to David, who has a short presentation about that, if he cares to.
A
Maybe; I think we have time for that. Let me just... I think you need... can anyone share their screen, or...

A
Okay, thanks. Go ahead, you should be able to.
I
All right, can everyone see my screen? Yep? Okay, great. Yeah, so as Zach mentioned, I'd first like to start out by giving a little bit of background on what Azure Stack HCI is. Azure Stack HCI is a solution that basically bundles various Microsoft technologies for compute, storage, and network virtualization in an optimal way, to deliver an HCI environment with better performance.
I
It hopefully reduces the cost of operating similar infrastructure that uses all these technologies in a less optimally configured way. It's most popular with, but certainly not limited to, users and companies with IT teams that already have some level of familiarity and affinity with these technologies, but that at the same time also have a desire to, you know, consolidate existing server infrastructure, maybe unlock easy access to some Azure cloud services, and perhaps refresh aging hardware.
I
So this offering basically enables provisioning and bringing up Linux and Windows VMs, and it also runs a dedicated Azure Stack HCI OS that is based off of Windows Server. And so our motivation was, basically, that just like on many other infrastructures and environments, users are creating Kubernetes clusters on Azure Stack HCI to orchestrate their containerized applications, and we want to simplify and streamline the experience of managing the lifecycle of Kubernetes clusters on Azure Stack HCI environments.
I
Basically, this provider is a v1alpha3-aligned provider, and one of the interesting things... let's say I go to show the templates.
I
Oh, I'm sorry; let me try that again.
I
To give an overview of one of the templates: Azure Stack HCI machines can be of different OS types, like Linux or Windows, and we have been working closely together with the SIG Windows community in Kubernetes, as well as the Windows operating system team, to actually refine the bootstrapping procedure of Windows machines and ensure we're using the best recommended practices.
I
So a lot of testing and development went into getting this right, and we actually also have a first, you know, experimental attempt at the retry-join procedure for Windows.
I
We'd love to raise a feature request to see if it's possible to add this retry logic to the kubeadm bootstrap provider. And we'd love to share, basically, all of the learnings that we incorporated into this provider, to help speed up Windows development efforts for other providers, especially since there's a proposal now on kind of how to provision workload...
I
Okay, can you see my presentation again? Yes? Okay, yeah. So there's a getting started guide that we added, and some docs.
I
However, in the getting started guide there's a placeholder right now for setting up the Azure Stack HCI environment correctly. There are basically some agents that still need to be published, and they will be published next week, so that people can actually run this project. So I just wanted to mention that.
I
So, any questions on anything I showed you today?
I
So, no. And I guess from my side: assuming this infrastructure provider adheres to the provider spec and follows the right conventions around naming, etc., what are the different layers of approval that it would need to go through to be added to the list of infrastructure provider implementations in the Cluster API book?
H
Yeah, I just want to say: excited to be, you know, openly part of the community now, and thanks, everyone, for your hard work on the upstream side.
A
All right, let's move on then. Mike, the autoscaler proposal.
K
Yeah, hey, everybody. So a couple of months ago my colleague Mike Gugino opened this PR that I linked here, adding an enhancement for scale from zero, and we had some good discussion there. But I'm seeing an uptick in requests from the autoscaler side; people are kind of curious about when scale from zero is coming, and, you know, they know we can do it. So I'm curious whether people might be willing to take another review pass on that.
K
I think where we're stuck right now is that we need to decide how we're going to annotate the information about CPU, GPU, and memory in a way that the autoscaler can get at it. I think Andy kind of helpfully pointed out that that information is available now in the kubeadm... what's it called... the KubeadmConfigTemplate, but there was some pushback about whether we wanted the autoscaler to have to dig that deep into the CAPI implementation to get this information.
K
You know, just at the MachineSet layer. But that may not be appropriate for what we want to do here, and so I'm just curious.
K
If anybody has thoughts about this, then, you know, maybe we could kind of push this forward to the next level. And there's one more issue, too, about taints: how a user could kind of have these taints available as well. That's another piece of information the autoscaler needs to know. So yeah, I'm curious to hear any thoughts, but if we don't have any, we don't need to spend a ton of time on this.
A
Thanks. Andy, you have your hand raised.
D
Yeah, I need to read through it again. I thought at one point we had talked to Mike about possibly having the data, be it an annotation or whatever, get placed on the appropriate resource or resources after you had at least one instance. So some component, be it the autoscaler or whatever, could go find an actual Machine for the MachineSet, figure out its characteristics, and save that information, as an annotation or whatever, on the MachineDeployment or MachineSet, and then the autoscaler could use that.
D
I think the only case that doesn't work with is if you start with zero replicas on your MachineDeployment: there's no way to get that information there, unless you manually put it there or have some tie-in with an infrastructure provider that knows what it needs to do and where to put that information.
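A minimal sketch of the "learn it from a live instance" approach just described: once at least one Machine exists, some component reads allocatable capacity off the corresponding Node and records it on the MachineSet, so the information survives scaling the set to zero. The annotation keys are made up for illustration; types are from the v1alpha3 API of the time:

```go
package capacity

import (
	corev1 "k8s.io/api/core/v1"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

// Assumed, illustrative annotation keys; not an agreed-upon contract.
const (
	cpuAnnotation    = "autoscaling.example.dev/cpu"
	memoryAnnotation = "autoscaling.example.dev/memory"
)

// recordCapacity copies a live Node's allocatable resources onto the owning
// MachineSet, where the autoscaler could read them later.
func recordCapacity(ms *clusterv1.MachineSet, node *corev1.Node) {
	ann := ms.GetAnnotations()
	if ann == nil {
		ann = map[string]string{}
	}
	ann[cpuAnnotation] = node.Status.Allocatable.Cpu().String()
	ann[memoryAnnotation] = node.Status.Allocatable.Memory().String()
	ms.SetAnnotations(ann)
}
```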
K
Yeah, I think you did have that comment in the PR, and you also mentioned (there was no answer to this) that the infrastructure provider should be creating this information based on the type of machines that you're requesting. So I think, you know, I'm totally happy to have it done that way.
D
I think we should think about it from a contract perspective, and also from the v1alpha4 perspective, where hopefully we can add more spec fields where needed instead of using annotations.
K
Right, and that's kind of the contract then: that your provider supports scaling to and from zero. All right, this is good. I guess the next action for me, then, is to go back in and look at the infrastructure providers and see if maybe I can answer some of these questions. But if we're cool with adding information to the MachineSet or MachineDeployment, then that's at least a way forward.
L
Yes, so I haven't gone through the complete proposal, but on both of the points: I think it would be very, very convenient if we have this information on the MachineDeployment rather than in the KubeadmConfig part. And on the second part, about fetching the instance details: what I basically understand from the experience is that it requires, for each instance type, how much allocatable CPU there will be, how much allocatable memory, and the GPU support, right? So I see that in the autoscaler, all the providers already...
L
All the providers already have a script which generates one file that has all the VM instance types and the necessary details in it. So either we can reuse that, or it should probably be okay if we simply import it in our provider and maintain it there. The reason it is being done this way is probably that it's not easy to fetch such information at runtime for all the providers.
K
You're exactly correct, Hardik. That's the reason why the other providers in the autoscaler, like, encode these as tables, basically, that they look up. I think my understanding of best practice in the autoscaler is that we probably shouldn't be looking into other providers' information to inform what we're doing; our providers should be bubbling that information back up through the API objects to the autoscaler.
K
So I have a feeling (I haven't looked at all the providers) that we might have some of these tables in our providers already, to do these lookups about what the type of machine is. So yeah, we wouldn't be querying the cloud API at runtime; this information should already exist once the provider is running.
C
Yeah. Oh, I had a question: do you have strong opinions on using annotations versus either spec or status in alpha 4? I'm asking this because annotations are usually hard to version and to deal with in conversion webhooks, and I would strongly prefer to have something on status or spec instead, if possible.
K
Yeah, I mean, I know this can be a contentious topic. I do not happen to have a strong opinion one way or the other; I know there are strong opinions about whether we should be using annotations or not. You know, as I mentioned before, in OpenShift we chose to use annotations because they're easy to add on, but I think it would be perfectly acceptable for us just to update our CRDs to contain this information.
K
You know, so we can have proper zero values and validation and everything, and then on the autoscaler side it'll just be easy: we'll just have a reference into the API, and we can just pull this information out. I'd be happy with that.
K
Okay, so just to repeat what I'm hearing here: I'll explore doing this as a CRD change; we'll add these fields somewhere (I guess we'll figure that out in the review, probably, and I need to do a little research too). So if everyone's cool with adding it to the CRD for v1alpha4, I'll explore that and just see if I can get the PR updated, and then maybe start to put forth some patches to see how it works.
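A hypothetical sketch of the "put it in the CRD" direction being explored here: first-class capacity fields the autoscaler could read from a scaled-to-zero MachineSet or MachineDeployment. Field names and placement are assumptions for discussion, not the accepted v1alpha4 design:

```go
package v1alpha4

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// NodeCapacity is the information the autoscaler needs in order to simulate
// a node that does not exist yet: allocatable CPU, memory, GPUs, and so on.
type NodeCapacity struct {
	Capacity corev1.ResourceList `json:"capacity,omitempty"`
}

// exampleCapacity shows what a provider might populate for a 4-CPU, 16 GiB,
// single-GPU instance type.
func exampleCapacity() NodeCapacity {
	return NodeCapacity{
		Capacity: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("4"),
			corev1.ResourceMemory: resource.MustParse("16Gi"),
			"nvidia.com/gpu":      resource.MustParse("1"),
		},
	}
}
```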
A
All right, let's move on to the Windows CAEP PR. James?
M
Hi, I just wanted to call attention to the PR I opened up. There were quite a few comments on the Google doc, and I think I addressed quite a few of them, but please take a look at the PR; there hasn't been a whole lot of movement on it since I opened it up last week. And since we now have an infrastructure provider that has some Windows support, I look forward to your comments on that as well.
M
I'm getting a few comments on the PR. I guess the other thing I'd mention is that I've also started to work on the image-builder image, on the scripts that would set up the Windows image, so hopefully I'll have something opened up for that, maybe next week.
A
Sounds good, thanks for the reminder; I myself need to go and take a look at it. Do you want to set up a lazy consensus timeline, like we have for other proposals, or is it too early for that?
A
Okay, you know what, let's just give people until the next office hours, next Wednesday, and then if by then there hasn't been much movement, we'll maybe set a lazy consensus date. How's that? Any objections?
A
No, I just meant until next week, next Wednesday: just leave it open, and then next Wednesday we'll set the date depending on how many... oh.
A
...comments. Okay, thank you. All right: Brian, with a demo.
N
Is it happening? Okay, so I made a couple of slides; I'll go very quickly. Basically, I started here with all these diagrams in the Cluster API book, with all these, like, four different controllers talking to each other, and I didn't really understand what was going on.
N
I have added a little bit of code to each of the four controllers. This is the diagram out of the book, the architecture diagram, on the right, and I've done it in the Docker provider for ease of demoing. So they spit stuff into a collector, and that puts stuff on the screen.
N
So now I am in Visual Studio Code and, yeah, okay, I'm going to run clusterctl to configure a cluster, just a one-node cluster. To avoid tempting fate, I ran it earlier and captured that stuff in a file, so I can look at it in a second, but right now I'm just going to YOLO it.
N
So we have, you know, a machine that's being provisioned, and we could go look at the logs or whatever. But, oh hey, I have to change window again; I did not practice this in a mode where I can't change window easily.
N
Anyway, let's go over here to Jaeger and say: what have you got? Jaeger has captured stuff from all these different processes and, if you're not familiar with Jaeger, basically there's a zoomed-out view here and a zoomed-in view here. What we can see is that clusterctl kicked off an operation, and then all the other controllers reacted to that and started doing their thing, which is mostly reconciling. And if we look down a bit, the CAPD controller is creating a machine.
N
Now, this is not streaming, so I need to refresh just to see if things have moved on a little bit.
N
But anyway, this is the machine being created by the CAPD controller, and that's pretty much the high level. I can talk for another three hours on this subject, so why don't you tell me what you want to know?
N
While you're thinking of questions, let me just point out a couple more things. So, let's see: the machine itself was created by the control plane, and I've logged the various Kubernetes API operations that happen inside the controller. So it created an object, and that then kicked off...
N
...the next... well, hang on, probably not that one. Anyway, creating an object kind of passes the context through to the next step in the process. And what else is interesting: right down here we're actually executing the commands. I logged them out as well, so you can see the kubeadm init come out.
N
So I did this because I personally was uncertain what was going on inside my provider and the CAPI upstream controllers and so on; they're all kind of talking to each other and making changes. And yeah, I do think it could be useful for lots of different things: for performance, like someone mentioned in Slack, but also just for "why is it not working" kinds of questions.
P
Okay, let's do the window change shuffle again. Sorry, I didn't raise my hand according to protocol. I'm sorry, no problem.
N
Oh yeah, you're right. So let's take a look. Let's see if I know how to do this. Basically, what I've done is I've hooked things in a couple of places. I used Jaeger, because I'm familiar with Jaeger, but this could be anything in that vein.
N
There's one of these where we pick up a tracing span from inside the object, and that's a technique that was in one of those KEPs, the one that I flashed the URL to earlier, and then we generally log a couple of things, but that's pretty much it.
N
Let me go back to the main program. So the other cool thing that happens is we wrap the runtime client. This is the new manager, setting up the manager for the runtime controller, and we have a function already in this controller, but what I did was wrap that.
N
So let's take a look at that code. Basically, if I scroll down, this is where all the logging of Get, List, Create and so forth is going on, and also where, any time we create an object, we add that annotation to it, so that gets us the flow through the layers.
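A rough sketch of that wrapping, against a recent controller-runtime and the OpenTracing client library. The annotation key and the exact carrier encoding are assumptions (the demo used Jaeger's binary format); only Create is shown, and a real wrapper would decorate Get, List, Update, and Delete the same way:

```go
package tracing

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// tracingClient decorates a controller-runtime client: every API call gets a
// span, and created objects are stamped with the trace context so the next
// controller in the chain can pick it up.
type tracingClient struct {
	client.Client
}

const annotationPrefix = "tracing.example.dev/" // assumed key prefix

func (c tracingClient) Create(ctx context.Context, obj client.Object, opts ...client.CreateOption) error {
	span, ctx := opentracing.StartSpanFromContext(ctx, "k8s.create")
	defer span.Finish()

	// Serialize the span context and attach it to the object as annotations.
	carrier := opentracing.TextMapCarrier{}
	_ = opentracing.GlobalTracer().Inject(span.Context(), opentracing.TextMap, carrier)
	ann := obj.GetAnnotations()
	if ann == nil {
		ann = map[string]string{}
	}
	for k, v := range carrier {
		ann[annotationPrefix+k] = v
	}
	obj.SetAnnotations(ann)

	return c.Client.Create(ctx, obj, opts...)
}
```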
N
Let's take a look at that file I saved earlier. That's what this looks like. It's kind of ugly, but it's in good company in the metadata section of a Kubernetes YAML; this is the binary format of a Jaeger trace context.
N
You don't need to understand that; it's an annotation, which we love. And yeah, the code is kind of attaching the... including clusterctl: I put the same thing in there to kind of kick it off, and then, as it goes down through the controllers...
A
...context. You said that you did this to get a better idea of how the controllers interact with each other. Do you feel like it helped you get that? Because I feel like this is really useful for performance and detailed tracing, but did you also get a good view of the different interactions overall?
N
Yeah, I fixed numerous bugs in our own CAPI provider. I didn't actually introduce myself, particularly: I work for Weaveworks, and we have what we call the existing-infrastructure provider. And yeah, it was actually failing in a couple of different ways that I didn't realize until I looked at what was going on in the traces: oh, this guy expects me to put a value here.
N
So, yes, very much. Oh right, so what do I want, why am I doing this, why am I telling you all this? Number one is suggestions for what to do next, and number two is that, ideally, some of this would go upstream into the controller-runtime libraries.
A
Oh, Lubomir.
E
Probably the Cluster API maintainers are going to recommend the so-called CAEP, which is the Cluster API Enhancement Proposal; that helps. I had a separate question: did you manage to benchmark the overhead of the tracing itself?
N
Well, formally, no. I mean, there are basically two levels to this question. If you don't turn it on, if you never wanted tracing, then it's going to go through code like this, where we do a little bit of lookup of metadata and so on. So there's a little bit of extra...
N
Basically
stream
stream
manipulation
type
manipulation
going
on
there,
which
is
then
discarded
because
we're
not
tracing
and
and
then,
if
you
do
turn
on
tracing,
then
then
what's
happening
is
if
every
span
is
going
into
a
udp
packet
and
being
fired
out
to
a
local
collector.
N
So I'm confident in saying that the overheads are very low, but they're clearly not zero. I mean, in context, there's a ton of other type manipulation and string manipulation and so on going on; there are objects being converted into JSON, into YAML.
Q
It's probably not a use case where you would want to sample, either, because this is not a high-traffic web service, right? Controllers are dealing with a small number of objects, so you could probably get all of the data, and that's pretty high value.
N
Yeah, but typically that sampling is deferred to something in the infrastructure that understands what you want to look at, so it can be adaptive. If you did have that problem, it can be solved in the tracing infrastructure; it doesn't have to be solved in the code here.
N
Yeah, I'll repeat that: that's taken from an existing KEP. Let me maybe put that slide on the screen.
N
So the annotation is described in this KEP and implemented in this one, which is a mutating admission controller, which I did not use; I did the hooking in the code as the object is created. Again, I could talk about that for an hour, but the ideas basically come from here, and what I did was implement them for CAPI, because this implementation was done for scheduling and creating pods and so on; it hadn't been done before for CAPI.
N
Well, yeah, I guess I'll probably write a CAEP and get some more feedback on that. The sort of utility code I'll probably make a PR into controller-runtime for, because it's usable even if it doesn't go anywhere by default. And then there are harder things; I mean, I made a slide.
N
Both the events would be quite cool to pick up in the timeline, as in the event recorder API, and the log messages that are already being logged by the controllers. Neither of those takes a context, so it's basically impossible in Go to hook them into the tracing spans.
N
So that's a slightly bigger exercise, to campaign to get. I mean, for the events API it's kind of obvious that it can take a context and no one's going to die, but logging might be a harder battle. Oh yeah, and then (I'm not going to change screen again) there's the bit where you come in at the top of the reconcile routine: basically every bit of code that's copied and pasted from the same place says context equals context.Background().
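A sketch of the context.Background() pattern being called out, and one way a reconciler could instead rebuild its span from the annotation stamped on the object (Reconcile had no ctx parameter in controller-runtime of that era). spanContextFromAnnotations is a hypothetical helper, shown with a plausible TextMap-based implementation:

```go
package controllers

import (
	"context"

	opentracing "github.com/opentracing/opentracing-go"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1alpha3"
)

type MachineReconciler struct {
	client.Client
}

func (r *MachineReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	// The pattern under discussion: a fresh context severs any incoming trace...
	ctx := context.Background()

	var machine clusterv1.Machine
	if err := r.Get(ctx, req.NamespacedName, &machine); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// ...so instead, recover the span stamped onto the object at creation time.
	if sc, err := spanContextFromAnnotations(machine.GetAnnotations()); err == nil {
		span := opentracing.GlobalTracer().StartSpan("machine.reconcile", opentracing.ChildOf(sc))
		defer span.Finish()
		ctx = opentracing.ContextWithSpan(ctx, span)
	}

	// The rest of the reconcile uses ctx, so nested API calls join the trace.
	_ = ctx
	return ctrl.Result{}, nil
}

// spanContextFromAnnotations is hypothetical; here it assumes the creating
// client injected the span as TextMap entries in the annotations.
func spanContextFromAnnotations(ann map[string]string) (opentracing.SpanContext, error) {
	return opentracing.GlobalTracer().Extract(opentracing.TextMap, opentracing.TextMapCarrier(ann))
}
```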
A
Sounds good. I just want to give enough time to Fabrizio to talk about the clusterctl operator, so let's move on, if you don't mind. All right, I'll share again... okay. Fabrizio, you want to give us some updates?
B
The main decision that we took during this meeting is that in scope for the management cluster operator are cert-manager, providers, and all the configs that are required to install providers, while out of scope are cluster templates and, as a consequence, also the move operations, at least for the first initial spike that we want to do. The next steps are to formalize the goals in a document, in a CAEP, and we should start iterating on the API that basically describes the objects in charge for the operator.
A
Any questions about the operator or the discussion? Actually, I have one: did you consider putting ClusterResourceSet in scope for that, or was there any discussion about that in the meeting? I haven't watched the recording yet.
B
No,
we
did
not
consider
because,
at
least
in
my
opinion,
customer
resource
sector
are
kind,
a
part
of
what
is
included
in
in
the
cluster
template.
So
for
me
it
is
how
it
is
out
of
scope.
A
Okay, we should talk about that, because I was talking with Sadaf yesterday and we were thinking... my impression was also that the ClusterResourceSet should be part of the cluster template, and that you'd have one per workload cluster. But actually it seems like it's not necessarily like that: you could have one ClusterResourceSet that you apply to your management cluster, and it can then be reused by multiple workload clusters.
A
The infrastructure templates that are part of infrastructure provider releases, and the flavors, don't necessarily have ClusterResourceSets in them, and so right now that would require an additional manual step from the user: going and applying those ClusterResourceSets to the management cluster before they create the clusters. So that's why I was thinking this might be a good use case for clusterctl in general, and for the operator, so that you can have that ClusterResourceSet, or those cluster resources, applied for you.
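For context, a sketch of the use case being described: a single ClusterResourceSet on the management cluster whose label selector matches many workload clusters. Types are from the experimental addons API as it existed around v1alpha3, from memory; names and resource contents are illustrative:

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	addonsv1 "sigs.k8s.io/cluster-api/exp/addons/api/v1alpha3"
)

// calicoCRS applies a CNI ConfigMap to every Cluster labeled cni=calico,
// so each new workload cluster picks it up without a manual step.
func calicoCRS() *addonsv1.ClusterResourceSet {
	return &addonsv1.ClusterResourceSet{
		ObjectMeta: metav1.ObjectMeta{Name: "calico", Namespace: "default"},
		Spec: addonsv1.ClusterResourceSetSpec{
			ClusterSelector: metav1.LabelSelector{
				MatchLabels: map[string]string{"cni": "calico"},
			},
			Resources: []addonsv1.ResourceRef{
				{Name: "calico-manifests", Kind: "ConfigMap"},
			},
		},
	}
}
```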
B
Yeah, we can take this offline. My hot take, my initial reaction, is that since you only need to apply the ClusterResourceSet once, and then there is no other maintenance that you have to do on this thing, it's basically not something that should be operated; it's something that you should apply. But let's take this offline and see if we have a shared understanding of the use case.
A
Okay, sounds good. All right, any other questions for...?