B: David, hey, Daniel. Sorry, I'm going video-silent for a while here. Okay, I'm on, okay.
A: I don't know if there's anyone who is attending. Oh, actually, I didn't officially start the meeting. I'm sorry; now that there is this sort of claim-host functionality, I know it's my fault. So, welcome to the provider implementers office hours for January 15th, 2019, 12 p.m. PST. If you are attending, please add yourself to the attendees list in the agenda doc, and if you have any topics or questions, it would be great if you can add them there as well. I will post a link to that doc in the Zoom chat right now.
A: I was wondering if there's anyone in attendance right now who is interested in a generic Cluster API provider, that is, a provider that could be used across multiple environments: for example, environments that have infrastructure APIs, like GCP or AWS, or perhaps even an environment that doesn't have an infrastructure API.
A: Absolutely. So today there are individual providers that are specifically tailored to environments. For example, there's an AWS provider, there's an OpenStack provider, there's an Azure provider, and these providers will call the respective APIs for those environments and bring up infrastructure: not just the machines, but also things like load balancers and maybe networks. But each of those providers is specific to that environment.
A: And there are no providers, for example, that target an environment where there is no infrastructure API, that is, where the act of creating a machine is something that an administrator, an actual person, has to do.
B: I'm not sure to what extent I'm prepared to really dive into the details of this in the meeting, but this is something that's come up multiple times in the primary Cluster API meeting. And the idea is: what if there were an alternative way to extend Cluster API providers, such that not everybody had to write an actuator, but instead you could use webhooks as a different extension mechanism?
B: So, historically, when you look at the providers that are the sort of lowest common denominator, there have been multiple SSH providers written, and they are all typically built around kubeadm and tools like that. And so the idea behind a generic provider is basically that we would like to be able to share the provisioning of Kubernetes on nodes, nodes which are created somehow.
B: But we would like how those nodes are created to be pluggable, perhaps with webhooks, so that multiple bare-metal environments, or multiple environments without shared or common infrastructure, can use the same provider, and only the webhook mechanism that allocates the machines would differ, depending on what your bare-metal environment, or your particular environment, looks like.
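A minimal sketch of what such a machine-allocation webhook might look like, assuming a hypothetical JSON contract; the field names here ("machine", "providerID", "addresses") are invented for illustration, since no such contract has actually been specified:

```python
import json

# Illustrative sketch of a machine-allocation webhook handler for a
# hypothetical generic provider. The JSON contract is invented for
# this example; no such contract has been standardized.

def handle_machine_create(request_body: str) -> str:
    """Given a serialized Machine spec, return allocation details.

    The generic provider would POST the Machine here; each bare-metal
    or cloud environment plugs in its own backend behind this endpoint.
    """
    req = json.loads(request_body)
    machine = req["machine"]
    # A real backend would allocate hardware here (PXE boot, a cloud
    # API call, or even a ticket for a human operator); this sketch
    # just echoes back fake allocation data.
    response = {
        "machineName": machine["name"],
        "providerID": "generic://" + machine["name"],
        "addresses": [{"type": "InternalIP", "address": "10.0.0.10"}],
    }
    return json.dumps(response)
```

A provider-side controller could then record the returned provider ID and addresses on the Machine object, keeping all environment-specific logic behind the webhook.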
A: Is there a specific use case that's important for you? For example, is it some environment, maybe an environment where there are no infrastructure APIs, or maybe just the ability to use the same provider across different public clouds?
C: Well, we're looking at different clouds all at the same time. What we were doing before looking at Cluster API was a similar thing: we were trying to set up the same way to build our cluster on top, but then we were plugging in different ways to build the infrastructure underneath it.
B: That's a great question, and it's one of the two alternatives that I've considered. After seeing your talk, Justin, another legitimate way to make it easier to write actuators would be to extend kubebuilder, as you did with addon operators, such that there's a new pattern, a Cluster API provider, and it could generate all of the scaffolding for you. One of the downsides of doing that is that kubebuilder generates the code once. Actually, I guess maybe there are no downsides: you can still import some other library, and that can be updated.
B: I'm curious what the advantage of webhooks is as an extension mechanism over actuators. One reason might be that a webhook is easier to write: even if the tooling generates all of the scaffolding, understanding the operator model and level-based triggers, the entire infrastructure, may be conceptually more difficult than understanding how to implement a simple webhook and write a simple server with a request and a response.
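The level-based model referred to here can be illustrated with a toy reconcile function (the names and state shapes are illustrative only): a controller repeatedly compares desired state against observed state and converges, whereas a webhook handles a single request and response.

```python
# Toy illustration of the level-based reconcile pattern the operator
# model is built on. A real controller would run this on every resync
# or watch event, so it must be safe to call any number of times.

def reconcile(desired: set, observed: set):
    """Return the (create, delete) action sets needed to converge."""
    to_create = desired - observed
    to_delete = observed - desired
    return to_create, to_delete

def converge(desired: set, observed: set) -> set:
    # Apply the computed actions; calling this repeatedly with its own
    # output is a no-op, i.e. the loop is idempotent by construction.
    to_create, to_delete = reconcile(desired, observed)
    return (observed | to_create) - to_delete
```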
D
That
is
probably
true,
although
I
would
say,
we
probably
sought
to
explain
to
people
that
they
have
to
make
sure
that
their
web
hooks
are
idempotent
type
things
right.
We
still
have,
as
you
some
some
of
that,
but
we
can
certainly
help
them.
Do
it
the
other.
The
other
huge
advantage
of
web
hooks
is
that
if
you
have
a
ruby
provisioning
system,
it
is
much
easier
to
read
a
web
book
right
than
it
is
to
write
the
implement
of
web
book
than
it
is
to
write
a
controller
and
Ruby.
D
But
I
would
really
like
to
see
the
variety
of
things
that
were
actually
talking
about
in
terms
of
the
potential
controllers
and-
and
it
may
be
that
we
can
deal
with
it
on
a
case-by-case
basis,
but
yeah
I'm
not
I'm,
not
fundamentally
opposed
to
to
putting
a
web
hook,
except
that
I,
don't
I,
don't
understand
why
it
would
have
to
be
in
the
or
I
think
it
could
be
done
in
a
separate
repo
in
a
separate
provider.
Yes,.
D: There have been some repo and organizational challenges around figuring out where the code that extends controller-runtime will live, which is, I think, the more challenging bit if you want to extend the framework for kubebuilder. I'm not sure; we haven't actually pushed on getting that merged until we got the other ones done. Honestly, it will be easier when the second implementation is in there.
D
I
think
it
is
not
extensible
at
the
moment
or
it's
not
designed
for
for,
like
it's
not
like
a
full
templating
system.
So
we'll
have
to
see
where
that
goes.
It
may
well
be
easier
to
create
our
own
scaffolding.
Type
thing
we'll
have
to
see
it
has
the
potential
the
implementation,
because
they
only
have
one
implementation
at
the
moment-
is
not
there
yet
so,
but
I'm
it'd
be
great
to
put
in
the
work
in
the
health
and
help
them
get
there.
I
don't
think
we're
there
yet
see.
D: It's fairly small, but we haven't got it merged yet. It's not hard; the tricky bit for the add-on operators work was building the library itself: figuring out what that library looks like, what the common framework and the common functionality look like, and then building that up. And that was mostly around requirements more than around implementing; the code itself isn't going to be complicated.
D: I think I have hacked one up myself, and I've done it in a single file, so it may be that the scaffolding is just overly complicated; I'm just not sure. I think it's multiple files anyway, but we can certainly look at making it easier. The provider spec is, I think, the tricky bit. So, yeah.
B: So we're looking at managed control planes in particular. We've been looking at the Gardener model, where they have a single etcd server and it's run in a manager cluster with frequent backups and snapshots. Now, actually, the detail about a single etcd instance or multiple etcd instances, that's not the salient point that I want to ask about. What I wanted to ask about is: does the etcdadm work allow etcd to be installed as a pod in a cluster, meaning not as a static manifest?
D
I
think
it
I
think
it
should
I,
don't
think
we've
scoped
it
yet.
But
yes,
one
of
the
things
we've
done
I've
talked
about
is
so
we
have
Etsy
TDM
the
CLI,
which
is
basically
the
manual
mode
of
operation.
We
have
EDD
manager,
which
is
for
cops
uses
to
do
it
automatically
we're
trying
to
get
the
automated
mode
building
on
top
of
the
CLI.
So
you
can
see
everything.
One
of
the
ways
we
thought
of
when
we
were
sort
of
brainstorming
to
put
that
on
a
firm
foundation
is
to
build
an
operator.
D: Yes. So we're imagining three modes of operation. Well, we have a mode of operation where it's manually driven by the CLI; we have a mode of operation where it's fully automated and relies on an external source of truth, like some cloud API, to drive it automatically; and then the proposed third mode would be another, more automated mode of operation where the source of truth is the Kubernetes API, which is an operator, right.
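The difference between those automated modes is essentially where the desired etcd membership is read from. A toy sketch of that pluggable source-of-truth idea (all class, method, and field names here are invented for illustration, not taken from etcdadm or etcd-manager):

```python
# Sketch: the automated modes differ only in where desired etcd
# membership comes from; the reconcile logic is shared.

class MembershipSource:
    def desired_members(self) -> list:
        raise NotImplementedError

class CloudAPISource(MembershipSource):
    """Fully automated mode: an external cloud API is the source of truth."""
    def __init__(self, instances):
        self.instances = instances
    def desired_members(self):
        return [i["name"] for i in self.instances if i["running"]]

class KubernetesAPISource(MembershipSource):
    """Operator mode: a Kubernetes custom resource is the source of truth."""
    def __init__(self, custom_resource):
        self.cr = custom_resource
    def desired_members(self):
        return self.cr["spec"]["members"]

def reconcile_members(source: MembershipSource, current: list) -> dict:
    # Compute which etcd members to add or remove to match the source.
    desired = source.desired_members()
    return {
        "add": [m for m in desired if m not in current],
        "remove": [m for m in current if m not in desired],
    }
```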
A: Excellent. But I just want to clarify: specifically in the Gardener model, where the control plane for a cluster, including etcd, runs in a different cluster, everything is pods today, right? I think, I assume, Gardener uses the etcd operator, and, I mean, I think...
D: My view, and it's only my view, is that we should, in the etcdadm project, create an etcd operator that uses etcdadm, so that we have consistent operation and consistent backups and snapshots, and you can take it from bare metal and install it on a Gardener cluster or whatever it is, or go from Gardener to bare metal, or Gardener to AWS, and all of these, and we have one sort of code path, or one well-trodden code path.
D
Just
because
it's
a
little
bit
tricky
and
my
understanding
is
the
sed
operator
itself
might
not
be
fully
supported
anymore.
Right.
I
also
think
it
will
help.
They
will
have
our
project
at
CDI
DM
on
the
automated
side.
If
we,
if
we
have
an
implementation,
that
that
has
a
reliable
source
of
truth
like
we
can
like
really
get
that
behavior
figured
out
and
follow
the
patterns
which
are
established
and
then
take
those
to
the
less
rely.
All
sources
of
truth,
where
it's.
A: I agree. I think, in that scenario, from the etcdadm perspective, Kubernetes is simply another source of truth plus an API to, you know, deploy new members, right? It could be Kubernetes, or it could be EC2, where the source of truth is the list...
A
You
know
a
list
of
instances
reported
and
then
you
can
maybe
bring
up
another
instance
to
spin
up
a
new
member
or
something
something
like
that
is
that
is
that
kind
of
what
yeah,
okay
yeah,
but
but
as
far
as
I
mean
as
far
as
timelines
David.
If
you're,
if
you're
asking
you
know,
okay,
can
you
can
you
use
it
tomorrow?
I,
don't
think
that
I
don't
think
it's
it's
like
it's
gonna
happen
overnight
and
I
I,
don't
think!
D
Definite
chat
with
the
gardener
folks
about
that.
My
when
I
looked
at
the
sed
operator,
it
had
some
odd
behaviors
around
like
if
you
lost
or
the
behavior
there,
they
may
have
fixed
it
or
they
may
not.
If
except
I,
don't
know
so.
D: But you actually said something there, which is that it's clear that we need to rely on the cloud. Actually, the more I'm using this, the more I'm trying to polish up etcd-manager, the more I'm like: oh, it's such a pain. It'll be so much nicer with the Gardener model, where you have a single etcd API and you bootstrap from there. It feels so...
D
In
other
words,
if
I
have
one
node,
I
can
bring
up
a
TD
on
that
no
problem
using
a
CD
VM.
If
I
then
bring
up
a
Canary's
control,
plane
and
schedule
an
STD
operator
that
is
reliable
and
works
everywhere,
and
if
I
can
somehow
make
it
that
I
don't
die
when
my
single
node
Etsy
D
goes
away,
then
maybe
we're
okay.
So
that's
the
Gardner
model
and
that's
well.
That's
my
m7.
The
garden
wall.
I've
mentioned
the
cluster
API
pivot
model
as
well,
and
so
that's
that's.
D
What
I'm
I'm
also
like
looking
at
that,
but
I'm
still
still
we
need
it
to
get
from.
We
could
get
cops
users
from
where
they
are
today
into
the
future,
but
maybe
we,
maybe
if,
if
the
gardener
a
model
takes
off,
we
we're
able
to
just
have
the
operator
four
or
more
people
use
the
operator
or
bear
or
bear
bear.
Cli
then
use
other
purchase.
But
again
we'll
see
the
Tran
being
mean
that
I
need
the
management
layer,
but
maybe
we
maybe
we
use
it
as
a
transition
to
get
people
onto
other
architectures.
B
So
one
of
the
reasons
to
run
at
CD
on
dedicated
Hardware
outside
of
kubernetes
is
to
avoid
circular
dependencies.
Once
you
have
a
manager,
cluster
and
managed
clusters,
you
you've
avoided
the
circular
dependency,
and
so
now
you
only
have
to
confine
yourself
to
worrying
about
reliability,
which
is
something
you
always
have
to
worry
about
it
anyway.
So
by
running
at
CD
and
as
an
operator
with
access
to
things
like
ooh
proxy,
it
makes
log
aggregation,
easier,
metrics
and
uniform,
etc.
D: Yeah, yes, and I think that ties into the reliability question, which is: how do we ensure that minikube, or kind, doesn't act as a single point of failure? And that's where you get the pivot, or what Daniel and I were talking about, the restart, which is about: can we get away with bootstrap snapshots? I just need to play with that. Sorry, with checkpointing; I don't know if you've played with that at all.
D: In theory it might take care of some, but not all, of the failure cases. So, like, do we take care of this one: I restart, I hit the reset button on my machine; does my pod come back without a control plane? Like, without a control plane, the answer is normally no, but with checkpointing the answer should be yes.
D: And the other approach is to figure out a way such that you don't care, such that you bring up an ephemeral one, and when it goes away, you are okay with it, right? So, in other words, you solve the bottom-turtle problem by saying the bottom turtle is temporary, and we have ways to bring up temporary clusters, minikube or kind or whatever it is. But that's again pretty tricky.
A
At
that
point,
you
have
to
bring
up
that
clusters,
control,
plane
and
storage
plane,
but
otherwise
you
know
you
you
can
you
can
sort
of
bring
those
bring
those
down
and
have
those
you
know,
I
have
those
not
be
around
and
and
rely
on
the
the
checkpointing
to
ensure
that
that
you
know
what
once
once
I've
said
that
I
want.
You
know
X
number
of
whatever
pause
for
you
know
for
some
cluster
to
run
here
and
sed
to
run
here.
D: You need to do that, right? Because the turtle that stands on a turtle is okay; it's only the very bottom turtle that you have to jerry-rig, type of thing, right? So, for example, let's think: if we ignore resources on that control plane, you wouldn't need to bring up the ephemeral control plane just to launch a new cluster.
A
Yeah
I
I,
think
I
think
the
the
Gardner
model,
and
you
know
whether
it's
the
ring
or
whether
it's
just
in
temporary
cluster
that
we're
talking
about
it,
seems
like
there's,
there's
quite
a
bit
of
complexity.
I,
guess
it's
not
clear
to
me.
You
know
if,
like
what
is
the
worst
possible
failure
and
and
how
do
you
were
like?
Can
you
recover
from
that?
Can
you
can
you?
D
I
guess
we're
working
towards
like
the
femoral
control
plan,
which
goes
away,
the
ephemeral
control
damage
pivots
and
then
the
like
seda
DM,
where
it
like,
brings
up
a
normal
like
traditional
all-in-one
control,
plane
I
feel
like
we
they're
all
fairly
easy
to
reason
about
and
think
about.
We
can
come
up
with
a
list
of
things
that
can
go
wrong
and
I
think
that
they
are
reasonably
bounded.
Yeah
I
definitely
agree
with
your
your
approach.
You're
like
what?
What
are
the
failure
modes
and
what
do
we
do.
A: I didn't get a chance... I learned a lot about the Gardener model at that HA meeting that happened in person at KubeCon Seattle. One of the things, I think it was Tim St. Clair...
A: Actually, no, it wasn't, at all. Yeah, I guess I had a question; I don't remember if I got to ask it. Actually, wait, David, it was your question; you asked it. You know, the Kubernetes API is becoming a place where some critical add-ons live, right? Critical items that use CRDs, or that just need access to existing add-ons, like Istio, for example, right?
A
If
you
in
the
managed
control
plane,
you
have,
you
know
you're
you're
able
to
lose
the
control
plane.
You
know
due
to
some
network
problem
where
you
know
where
I
was
like
your
your
every
every
all.
The
infrastructure
that
you're
that
you're
cute
with
couplets
run
on
is
is
fine
and
intact,
but
you've
lost.
You
know
when
you've
lost
the
control
planning
to
do
some
network
network
issue.
Is
that
like
it
is,
that
is
that
a
single
point
of
failure?
Is
there
a
way
to
work
around
now
you
do
set
up
redundant
network
links.
D
I
know
what
yeah
I
don't
believe
that
see
that
we
require
like
cross
regional,
like
I,
think
the
answer
would
be
that
you
would
run
your
your
your
management
control
plane
in
your
same
zone
or
even
VPC
or
zonad
VPC
in
native
land.
As
you
would
your
your
like
actual
cluster
and
I
think
there
are
I,
think
there's
a
trade-off
between
I
want
to
put
it
right
by
the
resources
that
I
am
managing
versus
I
want
to
run.
D
You
know,
I
actually
want
to
run
my
control
play
on
Jing
ke,
even
when
I'm
on
you
know
like
every,
even
when
I'm
on
bare
metal
right,
which
which
would
be
easy
for
everyone
to
do,
but
then
you're
imagining
depending
on
gke
and
like
yes,
as
you
say,
like
cloud
connectivity,
is
a
tricky
thing.
There's.
B
Sorry,
if
I'm
a
little
noisy
in
my
environment,
but
so
the
observation
the
Daniel
was
recollecting,
was
that
as
more
and
more
operators
are
written
using
anncr.
These
historically
we've
said
that
you
know
it's
okay.
If
the
control
plane
goes
down
because
your
applications
will
continue
running
and
the
observation
is
that
as
more
and
more
applications
are
written
using
CR
B's,
it
could
be
that
it's
no
longer
true
in
some
environments
or
some
applications
to
keep
running
when
the
control
plane
goes
down
so,
for
instance,
the
custard
API.
D: I would hope... I don't believe that the Gardener model requires a non-HA, single-node etcd, if you know what I mean; I think you could run HA etcd in the Gardener model. To me, the Gardener model is running your control plane as pods, and etcd as a StatefulSet. And as a StatefulSet, I could have one or three members, or five; pick your poison, right.
B: I don't know. So, when we were doing the backlog grooming for v1alpha1, I picked up a couple of tasks related to better documenting the semantics around the link between clusters and machines, and then the link between machines and nodes. And roles are another one of the areas where I think we need to do a better job of codifying the expectations.
B
So
I
think
I
think
that's
something
that
we
need
in
to
document.
I
I
think
Tim
Hawkins
observation
that
your
honor
environments,
where
roles
either
don't
exist
or
there
are
more
roles,
or
at
least
there.
There
may
not
be
a
common
set
of
roles
between
providers
right
so,
for
instance,
internally
we
have
a
use
case
where
we
have
a
proxy
node
role
and
that's
not
that's
not
generics.
That's
that's,
never
gonna
be
part
of
the
common
code.
D: I agree with that; I think that's a great topic, maybe for tomorrow and the general audience. I think one way we may be able to tackle it is by thinking through sort of the implications, like, you know: does this affect cluster autoscaler? Well, we don't want to scale up a master when it was meant to be a node, or vice versa, or scale the proxy when it should be a master. And I think I agree with you on that.
B: So, originally Vince opened an issue talking about splitting control-plane provisioning from the cluster actuator, and initially I was like, "I don't understand why." But I think this would be a good reason why: if you had a different actuator for worker nodes, sorry, for control-plane nodes, then you wouldn't need a concept of role, because the actuator would implicitly determine what the role was.
D
We
should
at
least
figure
out
where
the
rolls
is
a
I
am
a
master
or
I
am
a
node,
whether
it's
a
exclusive
enum
or
a
enum
set
or
open-ended
like
set
of
effectively
like
label
or
tags
right.
So
it's
fine
for
you
to
put
in
proxy,
even
though
no
one
else
knows
what
proxy
is,
and
it
only
has
meaning
inside
of
your
cluster
or
set
of
clusters,
but
it.
The
point
is
that
a
proxy
is
not
a
master.
For
example
like
we
should
figure
out
those
semantics.
If
we
keep
the
rolls
filled.
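The two role semantics being debated can be sketched side by side (these types are invented for illustration, not the actual Cluster API fields):

```python
# Toy sketch of the two candidate role semantics.
from enum import Enum

class ExclusiveRole(Enum):
    """Option 1: an exclusive enum; a machine is exactly one of these."""
    MASTER = "master"
    NODE = "node"

# Option 2: an open-ended set of role tags. Providers may add their own
# tags (e.g. "proxy") whose meaning is local to their clusters; the only
# shared rule sketched here is that "proxy" does not imply "master".
def is_control_plane(roles: set) -> bool:
    """Shared semantics any provider could rely on."""
    return "master" in roles
```

With option 2, common code only interprets the tags it knows about and passes the rest through untouched.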
B: And so, there are so many things to talk about; I'm not sure which ones we'll get to. But in terms of roles, I think there are sort of two ways that I see. One is to follow the kubeadm model of using labels and taints, and the other is to have a different custom API type and an actuator which is explicitly for control planes, and in that case we don't even have to talk about the role problem.
B
If
you
need
a
proxy
role
or
some
you
know
it's
like
that,
you
can
use
labels
paints,
you
can
do
whatever
you
want,
but
for
the
purposes
of
standing
up
with
kubernetes
cluster,
the
worker
and
control
playing
rural
distinction
can
be
solved
through
pipes
or
right
now.
What
Kobe
idiom
does
and
that's
paints
and
labels
and.
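As a reference point, the labels-and-taints approach kubeadm takes looks roughly like the following sketch; the exact label and taint keys are an assumption here, since they have varied across Kubernetes releases:

```yaml
# Sketch: marking a control-plane node with a role label and a
# NoSchedule taint, in the style kubeadm uses (keys illustrative).
apiVersion: v1
kind: Node
metadata:
  name: control-plane-1
  labels:
    node-role.kubernetes.io/master: ""
spec:
  taints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
```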
D: Yeah, I'd love to see it resolved. I don't know how we can even go about that, but yes.
A: If you want, you can add a link... okay, sorry. Okay, yeah, we're almost out of time, so I just want to see... I guess we're still talking about node roles.
A
So,
let's
see
Brighton
if
you're
still
on
the
call
I
was
wondering
so
I
think
youyou.
You
briefly
mentioned
says
this
is
the
coming
back
to
the
generic
provider?
Is
our
do
you?
Do
you
have
a
use
case
for
I,
guess,
sort
of
bare
metal
or
some
environment
where
well
bare
metal?
Where
maybe
you
have
you
know,
pixie,
booting
or
or
maybe
even
you
know,
no
infrastructure,
API
whatsoever?
Where
you
know
some
administrator
or
some
some
person
is
going
and
racking.
You
know
machines
and
making
sure
that
they
have
IPs.
A
Okay,
so
I
just
wanted
to
get
that
question
before
before
the
end:
okay,
who's,
there
is
there
sorry,
is
there
anything
else
that
that
we
want
to
discuss
before
we
wrap
up
for
today.
A
It's
someone
going
to
add
the
the
note
rolls
today
to
the
agendas.
That's
something
that
that
that
you
want
to
discuss
Justin
or
do
you
want
David?
Is
that
just
one?
Maybe
so
we
don't
forget,
maybe
we
can
add
it
to
the
agenda
for
tomorrow's
cluster
API,
the
general
meeting
or
anything
else
for
that
matter.
B: So I'll check if there's an existing ticket for node roles in the cluster-api repo; if not, as an action item, I can open one and then link it to the existing kubeadm thing on the same subject. Okay.
B: I'm not sure. First, I don't know what attendance is going to be like tomorrow, with people out and everything else going on. My goals for tomorrow... I have a bunch of other things, but I think one of the main things that we're going to want to talk about tomorrow is really the drive towards v1alpha1.
B: There have been some grooming meetings, but not everyone's attended, so I think at least a sort of status report on that is going to be necessary. And then my own personal goal is to start chipping away at those goals: clarifying things like the link between clusters and machines, etcetera. Node roles is on the list; I don't know where it is already.