From YouTube: Kubernetes Office Hours 20210818 (EU Edition)
Description
Office Hours is a live stream where we answer live questions about Kubernetes from users on the YouTube channel. Office Hours is a regularly scheduled meeting where people can bring topics to discuss with the greater community. It is great for answering questions, getting feedback on how you're using Kubernetes, or just passively learning by following along.
For more info: https://k8s.dev/events/office-hours
A
Okay, welcome everyone to today's Kubernetes Office Hours, where we answer your user questions live on the air with a prepared, esteemed panel of experts. You can find us in the office hours channel on Slack; check the topic for the URL for more information. Before we begin, let's get started by introducing ourselves. We'll start from the left. Yogi, please take it away.
D
Hello, everyone, I'm Dan Papandrea; people call me Pop. I am the director of open source ecosystem and community for a company called Sysdig. I'm also host of a show called The Popcast, and I'm also a CNCF ambassador. Well, that rolled off the tongue. And hello, everyone.
B
Hello, everyone, my name is Noel. I mostly go by fresbo on GitHub and everywhere else. I'm mostly a cloud native enthusiast; I hang around in multiple communities trying to learn and help each other.
E
Hey everybody, my name is Mario Loria. I am a CNCF ambassador and currently a senior SRE at Carta. I have been involved in one way or another with, and loved, the Kubernetes community: the cloud architecture and technical components, as well as building resilient platforms. I really enjoy that passion, and of course meeting wonderful people, helping out at KubeCon events, and things like that.
E
So I'm super excited about KubeCon coming in October, and I've also had my face featured on certain banners, as you can see in the office hours channel as posted by Pop. So I'm kind of this semi-infamous figure, if you will, which I enjoy. So, glad to be here.
D
Little known fact about Mario: he's the only person I've ever seen eat probably a gallon of ice cream in one sitting. I just want everybody to know that I've met him in, like, real life, so I just want everybody to know. That's that.
E
Wasn't it s'mores, dude? It's s'mores, man. S'mores is where it's at; I'll kill a gallon of that, no problem. So let's get to the questions, though. Oh.
A
Go ahead, David. Thank you all. Okay, so before we start, here are the guardrails: this is a Kubernetes event, so the code of conduct is in effect. Please be excellent to one another. This is also a judgment-free zone; everyone has to start from somewhere, so please help out your buddy by providing a supportive environment in the channel.
A
We will do our best to answer your questions, but the panel does not have access to your cluster, so, of course, live debugging is not going to happen. However, we will do our best to explain what may be happening and get you moving on to the next step. Normally we provide shirts; however, the CNCF store is currently being replenished, so we'll give you a shout-out and our undying devotion for all the questions that you provide. Panelists, you are encouraged to expand on your answers.
A
You can also help us out by tweeting, spreading the word, and paying it forward. This panel is made entirely of volunteers; if you want to rotate in and join us, please let us know. We'd love to have new people join in and help out. So, each month we take a brief moment to thank a member of the Kubernetes Office Hours volunteers for their continued effort and support of this program. This is the Mario edition: we want to thank you for all of your contributions to the Kubernetes community. Three cheers for Mario!
E
Thank you very much. I love office hours. I love the community. I think, at my core, I really love helping people, and this is one of the best forums to do so. We've had so many great questions and great thought exercises. I've actually gone back, thought about these things, and implemented learnings that I've had just being on the panel and learning from some of the questions that have come in.
E
Thank you, David, for continuing this. Many of you remember Jorge Castro, who helped kick this off, and I know Jorge personally as well; I could go to his house and beat him up right now. But no, it's just been fantastic, and I appreciate the banner and all that. So thank you for keeping this going.
A
All right, awesome. Well, thank you for all your effort, Mario. Okay, now we can get started. So if you are watching us live, please jump into the office hours channel on Slack and start posting your questions; we will do our best to answer them as soon as we can. We've also gone through the Kubernetes discussion forums for some questions to discuss while we wait on your wonderful questions coming in. So, question one. Team, are we ready? Come on, some enthusiasm. Come on, Pop. Let's go.
D
I'm sorry, I was doing some social while we were doing that. Multitasking, baby. Also, Mario's speech was so enthralling that I just needed some time. This was good. It was good.
A
All right, let's start with the first question. So we've got a question here, and it says: can you reference a single image pull secret from multiple namespaces?
C
Yes and no. I mean, I've solved this problem in different ways. The way I've solved this problem, in general: we used infrastructure as code, so Terraform, to provision our cluster, and as part of that we provision our namespaces as well, all of the namespaces that you require. That also allows us to control things such as resource limits and quotas on namespaces, so that it's consistent across the cluster, and as part of that provisioning we also provisioned the secrets for image pull.
C
So that's one solution that I've seen. Now, I know from reading up that there are, I think, operators out there that can do this, and I believe there is some manager that has some sort of either operator, or, I don't know if it's a CLI or an admission webhook, or something that can help with this as well. But the solution that I guess we've used is: as part of the provisioning of the cluster and namespaces, it essentially creates those secrets across all of the namespaces.
E
Yeah, so that link I literally just found: as soon as you read the question, I literally just googled "cluster wide image pull secrets". I've actually had this problem before, and I never really solved it; we've just kind of put them in all namespaces, like in a for loop, right? But I actually think that we should... I don't know if maybe this is something we can do at a lower level, but I think there should be, you know...
E
You've got pod security policy, which is now getting deprecated and getting pulled out eventually, but something like that for service accounts, where there's this kind of default set of things that you want to exist in every namespace, that every namespace has access to, and it's cluster wide: maybe a cluster-scoped object, right? So I think there's a real use case. But yeah, I think these namespaced objects, like image pull secrets, is one; in every single environment I've ever spawned, we've had a need for this sort of thing.
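The "put it in every namespace, like in a for loop" approach mentioned above can be sketched roughly like this. This is a hedged illustration, not anything from the discussion: the secret name `regcred`, the registry host, and the namespace list are all assumptions, and you would still apply each generated manifest with `kubectl apply` or a Kubernetes client.

```python
import base64
import json

def pull_secret_manifests(name, docker_config, namespaces):
    """Build one kubernetes.io/dockerconfigjson Secret manifest per namespace."""
    encoded = base64.b64encode(json.dumps(docker_config).encode()).decode()
    return [
        {
            "apiVersion": "v1",
            "kind": "Secret",
            "type": "kubernetes.io/dockerconfigjson",
            "metadata": {"name": name, "namespace": ns},
            "data": {".dockerconfigjson": encoded},
        }
        for ns in namespaces
    ]

# Hypothetical inputs for illustration only.
manifests = pull_secret_manifests(
    "regcred",
    {"auths": {"registry.example.com": {"auth": "dXNlcjpwYXNz"}}},
    ["default", "dev", "prod"],
)
for m in manifests:
    print(m["metadata"]["namespace"], m["metadata"]["name"])
```

The same loop is what Terraform or an operator ends up doing for you; the panel's point is that doing it as part of namespace provisioning keeps it consistent.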
B
I think it was featured on one of the TGIK episodes; they actually went through the thing. I mean, it's still fairly alpha, I would say.
B
It has exactly what Mario pointed out: the ability to sort of initialize certain things, and then the best part is that you could have different defaults for one set of namespaces than the other. So yeah, hierarchical namespaces. I was actually about to post that one; let me... oh, somebody has posted the link in the chat. Thanks, thanks.
E
Yeah, because we like acronyms: HNS. You'll see it go by that as well. I think it is... is it alpha or beta? It's definitely early.
E
Okay, yeah. We're actually starting to talk about it for our dev environments, where, instead of saying we need a build cluster, we're actually trying to make more intuitive, isolated, repeatable, disparate environments with just namespaces, and that's actually getting a lot easier now.
E
You've got projects like vcluster from Loft that enable you to run, like, a k3s cluster actually in an isolated namespace, and so there's a lot more that you can do now, and HNS is a natural extension to that. So yeah, great question, and I'm very interested in this use case.
A
Now I'm going to do something that we don't often do in office hours, but can I drill into this question a bit more and talk about the why? I think we agree that this is something that we need, but it also kind of strikes me that the behavior may be that they're creating namespaces manually rather than through automation. Would this be a problem that goes away if they're automating the creation of the namespace and the provisioning of the secret?
E
This is a great question, David, and I've been thinking about this. I think the problem is that in most organizations there's a line of demarcation where the developer is doing something and then SRE is doing something else, and the things they manage are very different, right? A developer doesn't actually care; all they need is the registry. They don't care about how that registry is orchestrated and instrumented in relation to how clusters are built and how things are stored securely, whereas the SRE team does. So SRE teams, you know, from experience...
E
We are the ones that are managing how things are connecting and interoperating, right? For the developers, we're providing that abstraction away from them so that it all just kind of works by default. So we're abstracting away the complexity; we're that platform team. And I think, for that reason, you can then say: okay, developer, you can go create as many namespaces as you want; do whatever you need to do.
E
Instead of having this operational piece (connectivity, secure access, whatever) be something they even have to think about, you know, "how do we apply it to the entirety of the cluster", they get it by default. I think that's where the question comes in. For automation and declarative models, of course they could be sourcing libraries, templating with Helm, things like that. I think a lot of it is: how big is the company?
E
How far has the SRE team gone in providing these components, this templating, and things like that? A lot of times it's not as far as you think. So there's, like, that middle ground of autonomy, and then not wanting to add extra complexity to whatever an end user might be doing. And I think an end user is, like, a developer or some engineer that's just trying to solve a problem. Let's remove infrastructure worry from their cognitive load, if you will.
B
The other thing could also be the rotation aspect of it. So let's say you actually introduce an image pull secret and, by policy, you need to rotate it. I've actually seen that once with one of my clients, where they needed to actually rotate the password, right? And especially if your registry has some sort of integration into AD and all that, that makes it compulsory, right? I mean...
B
Obviously you have the option of using robot accounts and things like those with some of the registry products. But in general, if you have to rotate it, and if you have to do it across multiple namespaces, that could be quite challenging; it becomes a thing of its own. Then you probably have to set up automation, and if it is expected that each of the namespaces actually has that automation in place, that becomes challenging.
A
Awesome follow-ups from both of you there; thank you, Yogi and Mario. Do we have anything else to add to this question, or are we happy to move on? I'll take that as a move on. All right. The next question is: pods always deploy on the same node. Hello, I have a curious behavior on my GKE regional cluster with two hosts.

A
In the end, one node is at 80% of CPU, whereas the other node consumes only 30%. I have checked all the parameters on the second node and everything appears to be correct. I thought about restarting the control plane, but as it is managed by Google, I cannot do that. Has anyone experienced this behavior before, and what could it be? So, to summarize: they're trying to run a workload on their GKE cluster, and it has always been scheduled to the same node in a two-node setup.
B
So one thing that I have actually seen is that the node was not ready; another time, the node was actually tainted. So you just need to check if the node is tainted; then you probably have to either untaint the node, or you have to put the toleration on the definition.
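A minimal sketch of the two remedies just described, checking for a taint and then either removing it or tolerating it. The node name and taint key here are assumptions for illustration, not from the question:

```yaml
# Inspect taints:   kubectl describe node <node-name> | grep Taints
# Remove a taint:   kubectl taint nodes <node-name> example-key:NoSchedule-
# Or tolerate it in the pod spec instead:
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  tolerations:
    - key: example-key      # hypothetical taint key
      operator: Exists
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:1.21
```

Either approach makes the second node schedulable for the workload again; which one is right depends on why the taint was applied.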
D
I think we need some valuable detail here, and that's what version this is. Maybe this is a version issue; I've seen this where maybe they need to auto-update again, and GKE has the capability of doing that. That's first and foremost. And I wish Archie was here with his background, so he could help us with this, because I'm sure he's probably seen this in the wild. But that's kind of my thought process when I was reading through that one.
E
Those components are completely abstracted away from you here. This is actually interesting to me; I might go back and do a little more research, because I'm interested in whether you can (and maybe this is just viewing the logs of kube-scheduler, which I don't think you'd have; I don't know if you can get that from the API, even in GKE), but, like: how did the scheduler come to that decision?
E
So, you know, when you go to schedule a workload, obviously the scheduler is generating a cost value for each node, and the node that wins gets the workload; that's the thinnest way to describe it. I'm wondering if there's a way that you can dig into that and see how the decision was made. Then you could figure out the nuanced pieces that were being considered and why the cost for that node is higher than the other node, right?
D
Yeah, so you can stream those logs. I mean, you can stream the audit log capabilities there, but you can also stream anything that's happening from the scheduler. I just didn't understand this concept of a regional cluster, right? I know the concept of, like, is this a specific region; that might be the issue as well. There are so many factors here without having the detail. But again, in terms of...
D
Process,
you
can
actually
go
into
the
logs
and
they
actually
have
a
really
cool
way
for
you
to
like
query
the
logs
for
specific
data
points.
I
think
that
are
better
than
other
other
things
that
are
out
there.
E
Yeah,
that's
really
great
to
hear
I
can
tell
you,
I'm
pretty
confident.
Eks
doesn't
really
have
a
similar
and
that's
the
world
I
live
in.
So
but
that's
that's
awesome
to
hear.
D
But there's another factor there too. Again, I'm looking at this like it might be the specific machine type; we don't know. Again, this is where, when you all are submitting questions, the more detail (the environment details) you give us to put in there, the better. I mean, there could be a specific machine type that's being used, or maybe they have app nodes that are larger, where they've said, okay, well, you know, we're going to add another, like...
A
And
push
the
ammo
if
possible,
in
fact
just
prevent
as
much
context
as
you
can.
It
really
helps
us
narrow
down
the
scope
of
the
problem.
Help
us
help
you
all
right.
Thank
you
very
much
everyone.
So
we're
gonna
move
on
to
our
next
question.
A
wonderful
persistent
volume,
persistent
volume
claim
question
so
volume
issue.
When
installing
prometheus
using
a
held
chart,
I
have
a
kubernetes
cluster,
comprised
of
one
control,
plane,
node
and
two
worker
nodes.
The
cluster
was
bootstrapped
using
qbdm.
A
I'm
trying
to
install
prometheus
in
this
cluster,
using
the
helm
chart
as
follows:
helm
install
prometheus
premiere,
facebook
reviews
all
right
cool,
however
they're
running
into
a
problem.
Prometheus
server
and
the
prometheus
alert
manager
pods
remain
pending
following
a
tutorial
they're.
Creating
the
persistent
volumes
manually.
A
The YAML is in the question and the HackMD for anyone following along; I will not read that line by line. However, in the end, when they run kubectl get pv, it seems that the Prometheus server is claiming the Alertmanager volume. How do I solve this problem? So let me try and summarize that... oh, we lost Mario. All right. So it would appear that this person is creating the persistent volumes manually.

A
Then the Helm chart is creating the claims, and the claims are grabbing the first volume that they can, and I don't think, off the top of my head, there's a way to be selective with that. Maybe... oh hey, Mario's back. Maybe we can talk about how that happens. Does anyone know how that happens? When you create a persistent volume claim, it checks the available PVs; how do you make sure it collects the right one?
E
So, doesn't it find one that meets all the requirements of the PVC? Isn't that the first decision, and then I'm guessing there's some sort of pinning you can do? I actually don't know very much about PVCs, but in my light knowledge, you've got a storage class, a PV, and then a PVC, and the PVC is just considering what PVs are there that can be used to satisfy the requirement. But I've got to believe there's something where you can say: I actually want an affinity, like, I want this PV.
E
I
want
whatever
this
storage
back
end
is
right,
but
I
I
don't
know
that's
a
great
question.
A
Okay, so let me look into that, because you're suggesting that they're creating the PVs manually, which I don't think is something any of us would normally do, right? We would require the storage class to create the persistent volume based on the CSI driver, and then that would connect all the dots for us.
B
You
you
can
you
can
create
a
pv
that
that's
fine,
you
can
create
a
pv.
You
can
reference
it
in
your
in
your
in
your
deployment.
In
your
definition.
You
could
reference
it,
but
I
I
think
the
challenge
here
is
that
the
pvc
that
prometheus
is
asking
for
it
must
be
asking
for
some
storage
class,
and
this
this
particular
pv
does
not
have
a
storage
class.
Well.
A
It's
using
the
default
one,
which
is
what
happens
when
you
omit
the
value
on
the
storage
class
name.
So
both
of
these
persistent
volumes
are
used
in
the
default
storage
classes
in
the
cluster
and
then
the
problem
is
the
claim
is
grabbing
the
wrong
volume
and
I
believe
that's
because
of
what
mario
said
and
there's
no
way
to
pen
the
volume
and
the
claim,
together
until
they've
been
gone.
D
By
the
node
affinity
right,
so
it's
going
to
do
it
by
like
what's
what
it's
been
assigned
like,
so
I
don't
think
you
can
do
that
without
using,
like
you
said,
some
type
of
you
know
some
other
external
kind
of
tool
like
casting
or
some
one
of
those
other
other
tools
to
be
able
to.
Like
really
do
that,
I
think
I
don't
know
I've.
I've
never
seen
that
in,
like,
like
you
said,
I've
never
went
to
that
degree
and
done
that
I
just
always
trusted
the
pb
and
the
pvc
to
do
the
work.
For
me.
C
So
I'm
just
looking
at
the
spec
now
like
a
pvc,
I
there
is
a
volume
name
in
the
spec
which
you
can
specify
the
pv.
So
I
don't
see
the
pvc
definitions
here,
but
I
believe
the
solution
here
is
really
manually,
create
the
pvc
and
map
the
pvc
to
the
specific
volume,
but
to
the
pv
using
the
volume
name
in
the
spec,
and
then
you
can
map
you
can
use
that
specific
pvc
in
the
correct,
correct
component
for
for
prometheus.
I
think
that
it's
a
solution.
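A minimal sketch of that pinning (the resource names are assumptions for illustration, not from the question's YAML): a PVC can name the exact PV it should bind to via `spec.volumeName`:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-alertmanager
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  # Pin this claim to one specific pre-created PV so another
  # claim cannot grab it first.
  volumeName: alertmanager-pv
```

The reverse direction also exists: a PV can reserve itself for a particular claim via `spec.claimRef`. Either way removes the first-match ambiguity described above.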
B
In the output it was also missing even the default storage class; it just shows as empty, right, in the outputs in the HackMD. So I believe even just creating a default storage class would help. Most of the community Helm charts just manage all the PV and PVC creation, so if there was a default storage class defined which satisfies a host path mount, it should just get all those things figured out automatically, rather than trying to create all that stuff manually.
A
All
right,
awesome:
okay,
we
have
one
question
live
in
the
officer's
channel,
which
I'm
going
to
remove
myself
from
answering,
because
I
know
nothing
about
embedded
edge
but
I'll,
throw
it
out
there
to
people
anyway.
So
do
you
see
cloud
native
supporting
the
embedded
edge
development
world
in
future
days,
anyone
close
to
embedded
or
edge.
B
I mean, I definitely hear enough about it. If you look right now, it's a pandemic; obviously not many... I work with a lot of banks, and not many banks have branches open at the moment, right, almost everywhere in the world. But those kinds of use cases, right, they call them ROBO, remote office/branch office, kinds of use cases. Embedded, not so much, but edge, definitely. So those kinds of logistics companies, they are using it today: edge computing devices, right?
B
So
definitely
there
is.
There
is
going
to
be
more
and
more
things
happening
in
the
edge
side.
I
I
see,
like
various
vendors,
are
actually
doing
a
lot
of
work
on
the
edge
side.
I
thought
there
was
some
sort
of
sig
or
maybe
community
post,
also
a
while
ago,
around
the
whole
edge.
E
I was just going to say, you know, when I signed up for KubeCon, the amount of day zero events that we have now is astounding. I mean, it's ridiculous, and I think that speaks to the amount of interest around many different tiers and scopes of applying cloud native in so many different layered ways, if you will. So when you ask, is this going to get bigger: absolutely, 100%, right? Like, if I could invest in a CNCF cryptocurrency, I would. I'd be all in.
E
Actually,
I
think
that
the
because
the
community,
because
of
the
growth
because
of
the
kubernetes
project
at
its
core
and
then
the
project
like
q
edge
and
k3s
like
I
think
it
is
bound
to
be
kind
of
at
least
if
you're
a
company
solving
a
problem
cloud
native,
is
going
to
be
a
big
part
of
the
way
you
think
about
solving
that
problem
right.
It's
going
to
be,
at
least
in
your
mental
mindset
like
an
option,
so
some
road
you
can
go
down.
E
So
I
would
say
yes,
I
would
say,
keep
keep
in
tune
to
kind
of
what
the
cncf
is
working
on
the
landscape,
and
you
know,
of
course,
other
applications
as
well.
It
looks
like
this
article
on
the
cncf
blog
is
from
huawei
and
how
they're
using
cube
edge.
I
think
so
we're
seeing
different
companies
start
to
come
into
the
landscape
and
say
like
look
at
this
is
how
we're
doing
it
in
sharing
and
then
you
know,
collaborating
as
well.
So
I
yeah
very
interested
in
this.
A
Awesome, thank you for that. So thank you for your question, and we hope that helps. Check out KubeEdge; there's also a link to k3s there, and lots of stuff happening at KubeCon. Although I feel like we now need to cancel office hours; this is not a planning session for KubeCoin. So, someone go buy the domain, let's get the org set up on GitHub, and then it's a real project.
D
I'm trying not to talk about product, okay, so go ahead. I mean, look, there are a lot of things you can do with PromQL, right, to be able to merge a lot of these metrics and all of that, so I would definitely take a look at that as a capability. There are a lot of projects too; there's things like, I think, Lens (not the IDE Lens), but, you know, Prometheus allows you to, you know...
D
Take
this
but,
like
I
think,
there's
ways
you
can
take.
You
know
exported
data
from
multiple
places
and
use
pomql
to
be
able
to
join
it
and
there's
a
lot
of
examples
out
there.
If
you
just
you
know,
you
know,
google
that
so
yeah
I
don't.
I
don't
want
to
kind
of
embellish
the
more
detail
there,
but
that's
kind
of
what
I'm
what
I'm
thinking.
B
I
I
think
the
the
one
one
part
of
that
question
is
around
the
exporter,
so
I
would
basically
use
node
exporter
for
that.
I
would
actually
run
a
good
exporter
on
the
ec2
instance,
so
that
that
would
actually
expose
an
endpoint
and
configure
the
endpoint
in
the
prometheus
to
basically
script
that
easy
to
enjoy.
D
And again, I don't want to shill product, but you can join all of this data together using Sysdig. You know what I mean, because we have literally done this. I've seen this at multiple places where you're taking these data points, right, and you're trying to, again, have one single dashboard. It's real; you can kind of manage that, but then you have to manage a back end and all that fun stuff. I'll stop.
C
Yeah,
I
guess
just
kind
of
thinking
from
aws
perspective,
all
of
these
metrics
and
vlogs
end
up
in
cloud
watch.
So
I
I
wonder
if
there's
a
way
to
I
know
grafana,
if
you're
using
rafana
for
your
dashboards,
you
can
probably
then
integrate
that
with
cloudwatch
directly.
So
perhaps
there's
no
need
to
even
do
this
integration
to
prometheus.
I
wonder
if
that's
just
the
way
to
skip
that
problem,
and
I
there's
also
cloudwatch-
has
cloudwatch
streams.
B
I'll put it in the channel, yeah. I also went and just checked the service discovery definitions in Prometheus; there's actually an EC2 service discovery in Prometheus.
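That EC2 service discovery looks roughly like this in a Prometheus scrape config. This is a sketch: the region is an assumption, and credentials are taken from the environment or instance role here; port 9100 is node_exporter's default:

```yaml
scrape_configs:
  - job_name: ec2-node-exporter
    ec2_sd_configs:
      - region: eu-west-1        # assumed region
        port: 9100               # node_exporter's default port
    relabel_configs:
      # Use the EC2 Name tag as a readable instance label.
      - source_labels: [__meta_ec2_tag_Name]
        target_label: instance
```

With this, Prometheus discovers the EC2 instances itself rather than you hard-coding each endpoint.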
A
Nice, great suggestions there from across the panel. Anything else before we move on? No? Got it. All right, next question: when is containerStatuses set? Hello everyone. We've observed that at some point in the creation of a new pod, the field status.containerStatuses is not set. Is that something that is expected? We are thinking that this is only true for a small amount of time, but it is causing a bug in the library we are using.
C
Yeah,
I
don't
know
much
about
how
those
latest
things
are
set
at
all
and
then
I
know
there's
some
work
around
trying
to
identify
like
consolidate
the
status
meanings
so
that
they're
kind
of
consistent
because
like
they
might
use
different
stats
just
for
different
resources.
That
might
mean
different
things.
But
I
have
no
idea
how
they're
set
at
all
yeah.
B
Yeah, I'm just looking at the pod lifecycle. In the pod lifecycle, very specifically, there's a container states section, right, and it only describes Waiting, Running, Terminated. So I believe there would be some time right after getting scheduled on a node and before it is picked up for actual execution...
B
That's
that's
probably
the
time,
but
I'm
just
curious.
When
does
the
library
actually
kick
in
it's
like?
Is
it
some
sort
of
operator
which
is
looking
at
the
api
events,
or
is
it
like
just
another
container
on
the
on
the
pod.
A
Yeah,
as
I
think,
you're
right,
it's
hard
to
know
with
there's
not
a
lot
of
context
in
this
question,
but
it
sounds
like
a
bug
in
the
library
right
like
it
should
be.
If
this
container
status
isn't
there,
I
should
probably
just
try
again
in
a
few
hundred
milliseconds
or
microseconds
or
whatever,
and
just
ignore
it
and
try
again,
I'm
assuming
there's.
Definitely
at
some
point
when
the
pod
sandbox
is
created
before
the
container
stays
exist
and
the
library
should
just
handle
that.
I
would
hope.
E
Maybe
it
was,
I
was
gonna
sorry
I
was
gonna
have
to
actually
I
was
gonna
say
exactly
what
david
was
gonna
say.
I
think
that
there's
like
some
some
small
tiny
period
there
where
the
pod
is
being
created,
but
the
containers
are
not
yet
created,
and
I
actually
go
back
to
the
original
kind
of
like
premise
for
this
library
and
what
they're
trying
to
achieve
is:
why
are
you
checking
the
container
status
instead
of
the
actual
pod
status?
E
Like, what about the workload being ready to service requests, right? And, you know, many containers could be in that pod, so I'm interested in that use case. But I would say, yeah, if that key doesn't exist, well, the containers probably aren't in existence yet. Although I'd be interested to learn more about that field: if you deep dive into the API and status, whether that field is actually a reliable field, because maybe in the next release it's not used in the same way.
B
Sorry, yeah, I'm just trying to find a KubeCon talk that kind of explained this issue. There was a talk where they had an issue with the status being set late or something; it's probably a very old talk. I couldn't find it; I'm trying to find it, but it talked about this exact specific scenario. If I find it, I'll just put it in the chat.
A
The --from-file flag on creating a config map. So: I am currently getting into Kubernetes. When creating config maps, is it possible to create a config map with key-value pairs from inside the file? There's a YAML example here in the HackMD; please check that out. But essentially they have a YAML file, and they want to create a config map using the keys in the YAML file as keys in the config map.
B
Yeah, I think I understand what they are trying to do. They are basically taking the content of a YAML file, and they want the keys in the config map to be the keys from the YAML file. That looks like some text manipulation would help.
E
I don't think --from-file works that way. I think, with --from-file, by default the file name is going to be the key in every scenario, right? Because a config map is kind of just: you have a file with config inside of it, and that's just a standard text file sort of thing; that is the value, and the key is the file you've passed in.
E
I
don't,
however,
I
don't
know
how
to
do
it
like
I'm,
I'm
actually
interested
in
that,
and
even
if
or
is
that,
an
anti-pattern,
that's
not
actually
intended
right,
like
you're,
not
really
intended
to
control
the
keys
there,
necessarily
for
whatever
reason,
so
I'm
interested
or
maybe
you
can't
do
it
on
the
command
line.
It's
something
you
have
to.
You
know
write
right
out
in
the
spec,
so
I
would
love
to
know
that.
B
Sorry, I paused to actually listen to you. I think the other option is something like Yogi mentioned, right: some kind of scripting or something to do it manually, or, yeah, append the content of the file to the key inside, using something like jq or yq.
E
So kubectl create configmap has two other options that might be acceptable, possibly. There's --from-literal, where you're providing it on the command line and you can specify explicitly the key and then the value, and there's --from-env-file, like Pop is talking about. I wonder, if you did it from an env file, whether it would take your key-value pairs. I could be wrong, but I'd assume it's treating them like environment variables.
A
Yes, I think --from-env-file is a good approach; you would have to change the contents to be what, you know, POSIX would expect. Or there's the other answer as well, which is: you have the YAML, so just wrap it in a ConfigMap spec and it'll work. Yes, I think they're both great solutions, and hopefully that helps a lot.
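The "some text manipulation" route mentioned above can be sketched like this: turn a flat `key: value` YAML file into the `KEY=value` lines that `kubectl create configmap my-config --from-env-file=config.env` expects. This is a rough sketch under assumptions: it handles only a flat mapping (no nesting, no YAML quoting rules), and the config map and file names are hypothetical; a real YAML parser or yq would be more robust.

```python
def yaml_to_env_lines(yaml_text: str) -> list[str]:
    """Convert flat 'key: value' YAML lines into KEY=value env-file lines."""
    lines = []
    for raw in yaml_text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(":")
        lines.append(f"{key.strip()}={value.strip()}")
    return lines

sample = """
# app settings
log_level: debug
replicas: 3
"""
print(yaml_to_env_lines(sample))
```

Each resulting `KEY=value` line becomes one key in the ConfigMap when passed via --from-env-file, which is exactly the "keys from inside the file" behavior the question asks for.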
A
Okay, this is our last question.

A
Okay, so let me just... right, this is quite a large question in the HackMD, so I'm going to summarize it based on what I can see skimming here. They have a Dockerfile, and in that Dockerfile they seem to be exposing, or rather copying, files into a directory. However, they're then using volume mounts in their deployment YAML to mount something, from some volume, host path, or config map or whatever, into the container, and they want to know what takes precedence there.
B
Yeah
sorry
guard
go
ahead,
go
ahead,
I'm
still
reading
it
all
right.
Go.
B
Right
solution
to
be
not
to
overwrite,
whatever
the
content
that
was
already
in
the
docker
file
was
to
use
the
subpath
option
when
you're
doing
a
volume
mode.
That's
what
I
do
to
actually
add
an
extra
file,
but
if
it's
a
folder
I
think
it
would
always
get
over
it
and
unless
you
specify
all
the
content
of
sub
but
manually,
I
haven't
tried
that
scenario.
So
I
could
be
wrong
about
that.
B
But
if
you
just
want
to
like
add
a
single
file
to
that
volume,
you
can
use
the
subpath
option
in
volume,
mods
you're,
100
right.
I
have
actually
been
burnt
by
that.
So
I
know
yeah.
So
one
thing
that
I
I
learned
even
like,
I
didn't
even
learn
it
during
my
ckad
time
was
when
you
mount
something
on
an
existing
volume,
and
this
was
not
even
like
a
volume.
It
was
more
of
a
config
map
field
that
I
wanted
to
mount
as
a
file
inside
the
container
yeah.
B
You
can't
do
partial
mount,
so
it's
like.
If
you
have
an
existing
directory,
it
will
be
just
over
like
it
will
be
shadowed,
it's
not
overwritten,
but
it
will
be
shadowed
by
the
volume
mount
whatever
that
volume
mount
is.
So
I
absolutely
agree
with
your
solution
around
having
a
sub
directory
now.
Typically,
for
example,
if
you
the
the
use
case
that
I
had
was
putting
some
sort
of
edition
and
then
and
then
it
was
it
an
init,
or
it
was
one
of
those
processes
where
I
had
to
add
something.
B
Sorry
nginx
configuration
so
nginx
configuration.
I
wanted
to
add
a
virtual
host,
but
I
didn't
want
to
overwrite
the
existing
configuration
so
because
it
has
the
cont.d
kind
of
folder.
B
I
mount
stuff
in
on
that,
instead
of
into
this
slash
sed,
so
something
like
that
is
better
because
it
will
be
it'll
be
shadowed
when
you
mount
it.
Whatever
was
actually
there
on
the
container
layer
in
the
over
file
system,
it
will
be
shadow.
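A minimal sketch of the subPath approach described above (the resource names are assumptions for illustration): mounting a single ConfigMap key as one file inside an existing directory, without shadowing the rest of that directory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx:1.21
      volumeMounts:
        # Mount only one file into conf.d; the other files baked into
        # the image's /etc/nginx/conf.d stay visible.
        - name: vhost-config
          mountPath: /etc/nginx/conf.d/vhost.conf
          subPath: vhost.conf
  volumes:
    - name: vhost-config
      configMap:
        name: nginx-vhost
```

Without the subPath, mounting the volume at /etc/nginx/conf.d would shadow the whole directory, which is exactly the precedence behavior the question ran into.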
A
Awesome
thank
you
for
that
all
right,
so
we
are
all
out
of
questions,
so
I
think
we'll
just
wrap
this
up.
Anyone
anything
else.
No,
all
right!
Okay,
cool
got
it!
Okay,
so
thank
you
all
for
joining
us
today
for
today's
kubernetes
officers.
Thank
you
to
everyone
who
asked
a
question
on
the
kubernetes
discussion
forums
or
live,
and
the
slack
channel.
A
Thank
you
to
the
following
companies
for
supporting
the
community
with
developer
volunteers,
google,
vmware,
cartax,
equinix,
metal
and
systick,
lastly
feel
free
to
hang
out
in
the
officer's
chat
channel
afterwards.
If
the
other
channels
are
too
busy
for
you
and
you're.
Looking
for
a
friendly
home,
then
you're
more
than
welcome
to
build
up
a
chair
and
hang
out,
we
will
be
back
at
the
same
time
next
month
until
then
have
a
great
month
and
thank
you
all
bye.