Description
Office Hours is a live stream where we interview one of the Kubernetes SIGs and answer live questions about Kubernetes from users on the YouTube channel. Office hours are regularly scheduled meetings where people can bring topics to discuss with the greater community. They are great for answering questions, getting feedback on your use of Kubernetes, or just passively learning by following along.
A: I see more folks are joining — they put it in the Slack.
A: So let me do the intro. Welcome to Kubernetes Office Hours — today is June 15th, and we have a couple of folks with us. Welcome, everyone. We are here to introduce a Kubernetes SIG — in a moment we'll let you know which one it is — and answer your questions live on the air, with experts that we get... they're all volunteers. So these are volunteers; check the topic in the URL. Before we start, there are the ground rules. This is a Kubernetes event from ContribEx, so we follow the code of conduct — it's in effect. Please be excellent to each other. This is a judgment-free zone; everyone had to start somewhere, so please help out your buddy by keeping a supportive environment in the channel. The channel is #office-hours. While we do our best to answer your questions, the panel doesn't have access to your cluster.

A: We cannot do live debugging — we'd love to do that, but we can do it afterwards. Panelists, you're encouraged to expand your answers with your own experience: a lot of the people that we invite, and who volunteer, actually have experience at work with Kubernetes, running it in production. Audience, you can help by pasting URLs in the channel. Keep posting questions on discuss.kubernetes.io — the ones that we're going to discuss are a couple that I found recently that didn't have an answer.

A: So usually that's what I do. You can help out by tweeting, and the panel is made entirely of volunteers. So with that, I think we can do a few introductions very quickly. I will go around — let me add a couple of folks that joined. Didn't Dims join? I think I saw somebody else joining... Archy? OK, so I'll go first. My name is Carlos Santana. I work for IBM as an architect, but in Kubernetes I'm right now contributing to SIG Release. I'm the lead for release notes, and SIG Release has been my home for the last few months, so ping me in Slack. Let's go to the next person — Dims?
B: Hi, my nickname is Dims. Usually you can find me in places like SIG Architecture, SIG Node, and a few other places — #code-organization is another Slack channel. I've been doing Kubernetes for a few years now and it's been a blast, you know, just meeting people here and having fun doing things together. So, looking forward to, you know, the panel today.
C
Yeah,
thank
you
chris
hi,
I'm
chris
priveteer,
I'm
a
software
engineer
on
the
developer.
Relations
team
over
at
equinix
and
I've
been
doing
kubernetes
for
a
while,
but
latest
contributions
are
in
kind
of
the
cluster
api
provider,
areas
for
the
equinix
metal
stuff,
so
excited
to
help
you
guys
out
today
and
it's
my
birthday.
E: Two days ago, yeah. [A: Good — happy birthday, Chris!] Hi, my name is Yogi. I am a principal solutions engineer at Yugabyte. I'm based in Singapore, and I have been working with Kubernetes for close to four years now, I guess, and I have a variety of experience setting up things for customers and partners and all. Mainly, my contribution is that I have been doing community work within Singapore and in the region for Kubernetes.
A: Very good. Archy, you were able to join us — quick introduction?
D: Sure, hi. My name is Archy. I'm a customer engineer at Google Cloud these days, and I'm a CNCF ambassador based in Montreal, Canada. I'm organizing Kubernetes and CNCF meetups across the country and teaching Kubernetes at universities, so I'm happy to be on this amazing panel — and, you know, seeing Dims with us today, it's amazing. So, looking forward to the show.
F: Yes, I'm here. I am an operator of Kubernetes, I'd say — the first time I tried it was six years ago, and currently I work on a team that deploys Kubernetes clusters around EKS.
A: Very good. So, with that: what we're doing with Kubernetes Office Hours is, in addition to picking up a few questions from Discuss, we try to bring someone from every SIG. David Rocco started that, and I continued it as the host. I do it for a selfish reason.

A: It's a way I can network and learn about all these things, and which one to join next, because I want to contribute more — and then I keep learning about what they do, because there are many SIGs, and I keep learning and meeting people. I've met a lot of people this way. So, for today we are going to talk about SIG K8s Infra — I never know the pronunciation — Kubernetes Infrastructure, the infra part, and we have here, I think, Dims, one of the chairs, to share —

A: — what they do, so you can see if you would be interested in joining their meetings and learning what they do, and then becoming a contributor and a member. Who knows — in a few months you're a lead and a chair, or leading one of the efforts in there. Dims? If you don't have anything to share, I can share the website, so people can see where things are as you speak.
B: Correct. So, where do we even start, right? What does this working group do, and why is it a working group? Those might be some of the questions that you have in your mind. The way to think about this is: when Kubernetes got started, there was a lot of infrastructure that was set up on Google Cloud, including things like the CI jobs that run pre-submit jobs, as well as the periodic jobs, that ensure that your PR is green and doesn't break anything. Almost all the kinds of things that you need — including downloads of the debs and the RPMs for Kubernetes, and downloading the container images — all of this was in Google Cloud, and it was maintained by, you know, folks from Google, and they were paying for it as well.

B: So, over a period of time, we realized that the community wanted to have a say in how things are run and how things are set up, and Google also wanted to ask: hey, how do we reduce the burden on Google engineers, and actually count how much we are paying — they are paying for it, and, you know, we are using it, et cetera, right. So what we ended up doing was: we started a working group to look at these issues and see how we could move certain things out of Google engineers' hands, so the community would then have a say in how things happen. So we went to CNCF and we said: CNCF, we want to do this.

B: We want to make sure that people in the community are able to do things like, you know, make a release — for the longest time the Kubernetes release was guided, was handled, by folks from Google as well. So there were a lot of things that we were not able to do as a community that we wanted to, you know, do. So we said: okay, let's create a working group here and we'll set up some infrastructure.

B: Maybe we'll start on Google Cloud itself, but the idea would be to go from Google Cloud into Amazon and Azure and other clouds as well, over a period of time. So we set up a bunch of infrastructure in Google Cloud, and now we are slowly moving things out of the existing GCP projects where they are running, into projects owned by CNCF. As a result, what do we get out of it? We know what applications are running; we make sure that we have the source code for all the applications.

B: We are able to build newer versions of things; we know what scripts are being used to run them; we know that we can replicate the builds for the different applications without having to depend on somebody's desktop deep inside Google, for example, right. And when we have new people coming into the community, they are able to see and touch and feel and make changes to the things that we are trying to do.

B: We are also enabling SIGs to have their own staging repositories for container images, and whatever else they need as well. So if you go through the different items in the blog post, you will see that we have been able to move DNS records, for example. Say, Cluster API: Cluster API has a documentation website, and there is a DNS record for it, and that is managed by the k8s-infra team.

B: If you go to SIG Release — they are cutting releases on a regular basis; where do they store the images? That is a spot that was created by, you know, our team. And most of the stuff that we have is either in bash scripts or Terraform, or, you know, a mix of both. Another example would be: typically, you will see that, you know, the day after a release happens, you can start using client-go, for example.

B: Right — there's a publishing bot that creates these tags for the different repositories; that's a publishing bot that runs within the Kubernetes infra itself. So what we're trying to do here is — we want to do a bunch of things. We've done a bunch of things, but there are a lot more things in the pipeline that we need to do.

B: For example, if you want to create a new bucket in GCR, then you go edit the YAML file, and then, you know, you push this button to run, essentially, the scripts that are going to create the staging repository that was newly added to the YAML file. So we think in those terms: how do we make it easy for people to request the things that they need, and how do we provision those things with the least amount of effort from our side?
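To give a flavor of that request-by-YAML flow, here is a hypothetical sketch — the field names are made up for illustration; the real schema and provisioning scripts live in the kubernetes/k8s.io repository:

```yaml
# Hypothetical shape of a staging-repository request: a contributor adds an
# entry like this in a PR, and the provisioning scripts reconcile it into a
# real GCR staging repo plus the IAM bindings for the owning group.
stagingRepositories:
  - name: k8s-staging-example-sig      # illustrative name
    writers: k8s-infra-staging-example-sig@kubernetes.io
    autoDeleteAfterDays: 60            # staging images are garbage-collected
```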
B: So we need people who are good at doing the DevOps stuff. We want people who are able to think a little bit ahead as well, so that we can figure out: hey, what is the biggest problem that we have today? Which is: we are running out of money, because we have moved so much infrastructure from the Google side to our side that we are hitting our limits. For example, we are spending a lot of money on downloading binaries and downloading container images.

B: So then we took that as a problem and said: how do we reduce the cost? How do we spread the load to other cloud providers? So we want people who are able to think in those terms — forward-looking — and kind of get to the point where we have solutions ready to scale up to what the whole community wants us to do.

B: There are a lot of interesting problems, and you get into all kinds of corners of different cloud systems as well, and it is definitely a fun place to be, because you are helping your fellow people do the things that they would like to do. Hopefully that helps give an overview of the things that we are trying to do here.

B: If you have any questions, we have a Slack channel — you can hit me up on the Kubernetes Slack or the CNCF Slack — and, yeah, there's a mailing list, the whole shebang. We have bi-weekly calls as well, for sure. So, if you're interested in this kind of work, where you are able to help other people in the Kubernetes community, please come and talk to us. We would love to have you.
A
So
the
these
am,
I
muted,
I'm
not.
Can
you
hear
me
yep
these
things
so
a
couple,
a
couple
of
questions:
do
you
do
your
sig
is
like
a
provider
to
other
seeks
of
infrastructure,
for
example,
for
people
that
are
not
familiar,
how
we're
producing
kubernetes.
So
when
somebody
submits
a
change
to
the
cube
api
server
right,
they
change
one
line
of
go,
and
hopefully
they
write
a
test
case
as
a
best
practice.
We
told
them.
A
The
test
that's
been
something
that
is
in
the
radar
lately
right
the
quality
improvements,
but
what
happens
there
when
a
pr
pr
gets
created
those
kubernetes
gets
deployed
in
gke?
I
don't
think
so,
or
the
a
new
current
disclosure,
I
guess,
deploying
vms
or
small,
one
like
somebody
needs
to
run
pay
for
that
infrastructure
and
also
the
automation
of
it.
So
there's
a
delineation
between
testing
on
your
sig.
B: Yeah, there's a lot of overlap between the different things. I'll take one example, right: say you are making a change in the kubernetes/kubernetes repository and you're updating a Go file there. What ends up happening is there is a definition for a number of pre-submit CI jobs that run to give a green signal on it. Now, these CI jobs are launched on what we call Prow, and Prow is running in a GCP cluster — typically, a Prow job runs inside a GKE node as a pod, and then, if it needs to stand up a full Kubernetes cluster, it gets one GCP project where it can do that work and essentially starts a local copy of Kubernetes. You know, most of the jobs don't need to do that, because all they are doing is running verify checks, like: hey, is the Go formatting fine? Or — a spelling check is another example — and there is a bunch of other verify scripts as well. But there are some scripts that actually need a full Kubernetes cluster to actually try the test cases and make sure that the change you introduced doesn't fail. So, essentially, what I'm trying to say here is that most of the things are automated — the CI...
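For a flavor of what those job definitions look like, here is a minimal Prow presubmit sketch. The job name, image, and command are illustrative, not the exact entries from kubernetes/test-infra:

```yaml
presubmits:
  kubernetes/kubernetes:
    - name: pull-kubernetes-verify     # illustrative job name
      always_run: true                 # triggered on every PR push
      decorate: true                   # Prow injects clone/upload utilities
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest
            command: ["runner.sh"]
            args: ["make", "verify"]   # lint/format/spelling checks; no cluster needed
```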
A: I see. And you're doing automation also for — so, in your realm would be automation to define, as much as possible, infrastructure as code — declarative, like: we want three buckets, we want three GCP projects, we want...
B: Absolutely, those are definitely great examples of the automation that we do. Another one, which is probably a little bit more what other people have ended up using, is creating mailing lists and adding people to mailing lists, right. For example, you know, we have automation that creates mailing lists under kubernetes.io. In fact, earlier we used to have kubernetes-dev on Google Groups; now it is dev@kubernetes.io, and that is taken care of by our automation.
B: So that was one of the things that we rolled out, where, if any of the SIGs want to create a new mailing list — for example, for an ACL, so that they can say: hey, for this GCP resource, to view this resource or edit this resource, you need to be in this Google Group — we end up having a YAML file where there are definitions for what the mailing list is, what the name of the mailing list is, what, you know, the comment on the mailing list is. So all people need to do — a simple example is, when the new release team starts up, they go and update the YAML files, and people automatically get access to all the resources that the release team needs to do its job. (A sketch of what those definitions look like follows.)
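Roughly, an entry in that groups file looks like the sketch below. The field names are from memory of the kubernetes/k8s.io groups tooling, so treat them as approximate:

```yaml
groups:
  - email-id: release-team@kubernetes.io   # the list/ACL being declared
    name: release-team
    description: "ACL for resources the release team needs"
    settings:
      ReconcileMembers: "true"             # membership is fully driven by this file
    members:
      - new-release-lead@example.com       # adding a line here grants access
```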
A
Yeah,
so
very
very
heavily
infrastructure
as
good
and
and
note
in
there
in
k
native,
I'm
I'm
steering
4k
native,
so
we
just
joined
cncf,
but
we've
been
doing
a
lot
of.
We
took
that
code
that
that
tooling,
to
create
those
declarative
way
of
declaring
the
groups
and
who
belongs
to
them,
so
they
can
have
access
to
this
resource
in
the
gcp
project
or
that
resource
and
gcp
project,
and
we
brought
that
into
k-native,
so
we're
there's
other
projects
benefiting
from
from
this
work.
A
So
if
you
join
this
sig,
your
contributions
is
not
just
going
to
benefit
kubernetes,
but
it
may
benefit.
Anyone
in
the
world
in
our
case
will
be
another
cncr
project
like
like
canada,
because
we
are
using
this
type
of
automation.
So
we
have
a
way
of
like
spending
our
time
in
other
things
than
than
trying
to
edit
and
configure
things
in
consoles
with
the
mouse,
and
things
like
that.
A
So
for
for
those
that
want
to
learn
more
about
it,
I
put
the
link
in
the
in
the
in
the
youtube
stream,
but
basically
every
sig
has
this
format
they're
in
the
kubernetes
community
repo.
So
you
will
find
in
the
readme.md
and
actually
we
we
do
automation
everywhere,
so
there's
actually
a
json
file
behind
this
or
a
yama
file.
A
If
I
remember
that
the
drives
these
creation
of
these
readme
files,
that
have
consistency-
but
you
can
see
here
when
when
do
they
meet
the
wednesdays
at
20,
utc
bi-weekly,
you
can
actually
see
what
are
they
they're
talking
about
join
the
zoom.
So,
as
my
recommendation,
one
person
that
I've
learned
lately
is,
I
have
sent
people
here.
I
think
I
sent
muhammad
from
kennedy.
A
He
wanted
to
learn,
and
it's
just
like
just
look
around
like
just
just
join
the
meeting
and
and
listen
like
that's
that's
to
be
your
first
step
in
your
joining
just
just
listening
to
what
they're
doing
maybe
they're
talking
about
this
mirroring
of
the
images
across
the
crowd
providers,
maybe
you're
interested
in
the
proxy.
Maybe
you
want
to
look
at
the
code
and
get
interesting
in
there
and
and
then
learn
and
then
help
out
right,
raise
your
hand
saying
like
I
want
to
shadow
someone
that
is
doing
that.
A
So,
that's
something
that
maybe
we
we
infuse
in
other
sticks
like
that
shadow
thing
of
like
being
a
shadow
first,
not
taking
responsibility
for
something
and
then
until
you
feel
comfortable.
So
I
know
dean
seems
to
to
leave.
I
think
he's
he's
late
to
the
hell
his
other
meeting,
but
any
anything
that
people
want
to
add
or
deems
yourself.
B
Nothing
from
me
thanks
a
lot
for
the
opportunity
to
you
know
pitch,
for
you
know
something
which
is
really
close
to
my
heart.
Thank
you.
A
Yep,
eliminating
toil,
that's
that's
what
I
call
it
right
at
the
end
of
the
day,
and
I
I
think
a
lot
of
people
in
our
space
in
devops
can
be
familiar
with
that,
and
some
people
that
want
to
get
into
this
space
right
they
want
to
learn.
Hey
the
best
bet
to
learn
is
like
joining
an
open
source
project
and
learn
about
this
with
real
things.
These
are
real
infrastructure.
Real
resources
actually
cost
money.
Cncf
pays
for
it.
A
A
Thank
you
themes,
yeah
bye,
so
there
you
have
it
there's
a
maybe
there's
a
question
in
the
shot.
I
think
I
saw
see.
Can
someone
send
a
link?
A
Oh
yes,
that
that
was
in
the
every
sig
we
have
76.
So
this
is
community
case
infra.
Every
readme
will
have
the
recording,
so
you
actually
can
see
what
they
what
they
talk
about
in
the
last
one,
and
then
the
link
to
the
to
the
google
doc,
which
I
can
copy
into
the
chat
and
besides
these
are
documents
that
are
free.
You
don't
have
to
belong
to
a
cig
or
have
special
permissions.
A
These
are
docs
that
are
like
anyone
with
the
link
can
access
and,
as
you
can
see
deems
is,
is
the
share.
Our
node
is
another
share
and
you
have
the
the
leads
here
so
reach
out
in
slack,
there's
a
there's,
a
channel
and
also
through
the
meetings.
If
you
can
attend
that's
good,
if
you
cannot
attend
because
of
the
time
right
globally,
you
can
just
join,
join
the
slack
re
watch
the
recording
and
then
ask
what
you
can
help
or
who
you
can
shadow.
A
So
that's
that
one.
Let
me
hide
this
one:
add
the
link.
A: Yeah, yeah — and I think I try to highlight that there are many SIGs that people can contribute to, and you don't have to have six years of Go programming and Go profiling, right. That's usually the perception people have: that to join Kubernetes you have to have this background of being an expert in Golang, because, you know, the majority of the controllers and everything is Go. But we have code that configures infrastructure, right: configuring the plugins for Prow, configuring the GitHub automation — he didn't mention that — the automation for the mailing lists, automation for the groups. A lot of those things are infrastructure as code: it could be Terraform, it could be bash scripting — a lot of bash scripting. So, a hint:

A: If you want to see a lot of bash scripting, look for the hack folder — it's usually the hack folder — and there will be a lot of shell scripting in there, which is nothing wrong, right; we get a lot done with that. But yeah — DNS is also another thing that maybe you're familiar with: a lot of DNS records, how to create them, how to create external DNS records — that's also useful for those types of things.
A
So
with
that,
I
think
we
can
move
to
the
questions
and
answer
section.
So,
let's
move
to
the
any
any
comments
before
we
move,
so
I
I
think
I
I
will
have
zig,
I'm
not
promising
anything,
but
I'm
trying
to
get
six
security
next
next
month,
so
that
that
would
be
a
good
one.
The
security
is
becoming
a
hot
topic
lately,
so
let
me
bring
so
we're
done
with
this.
This
is
the
blog
people
wanted
to.
Let
me
share
this
on
the.
A
And
for
the
hack
md,
the
hacking
is
just
for
internal
purposes
for
us,
but
the
first
question
that
I
found.
I
know
if
you
folks
have
found
any
questions
but
usually
in
discuss.
A: So usually what I do is look in Discuss for the latest questions that don't have an answer. So this one is the first one, which was very interesting: expose node information to a container. OK, so it's Kubernetes 1.20, which I think means they're on bare metal, and it says: "Hi, I need to expose node-related data, like the node names or labels, as a variable or a file inside the container."

A: So he wants to get information about the nodes from inside a container, inside a pod, I guess. It could be a pod that does infrastructure things; it could be, say, something for the SRE team — something like that, about the cluster. I'm guessing it's the same cluster that he's running on.
D
Yeah
but
but
I
I
put
a
few
answers
actually
on
this
one
like
first,
there
is
actually
open
issue
like
feature
request
around
that
which
has
never
been
closed.
So
that's
definitely,
you
know,
maybe
plus
one
if
you're
looking
for
this
type
of
functionality-
and
you
know
that's
why
it
would
be
nice
to
have
diems
on
call.
So
you
can
actually
comment-
and
you
know
how
we
can
push
this
request
faster
forward.
Do.
D
Let
me
put
it
in
the
chat,
but
there's
also
in
the
slack
comment
of
the
hackmd.
I'm
gonna
put
it
here
as
well
and
yeah
like
the
they're
off.
You
know
some
people
what
they
do
is
basically
they're
using
it.
You
need
container
that
is
running
cute
cuddle
and
you
basically
can
run
a
command,
and
it
is
a
neat
container
on
on.
D
You
know
to
get
the
node
labels
from
your
from
your
node
and
then
expose
it
as
an
environment
variable,
and
then
you
can
read
it
on
your
part
right.
So
this
is
like
not
a
most
optimal
way,
but
this
is
like
a
kind
of
a
hack
that
you
can
do
yourself
in
order
to
get
this
information.
There
is
a
feature
in
kubernetes
as
well
that
has
been
recently
added
called
pod
topology
spread
containers
I
put
at
the
link
as
well.
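A minimal sketch of the topology spread constraints feature Archy mentions — here spreading replicas across nodes, which covers many of the cases where people think they need to read node labels from inside the pod (the Deployment name and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
        - maxSkew: 1                          # per-node replica counts may differ by at most 1
          topologyKey: kubernetes.io/hostname # spread across nodes; use a zone label for zones
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.21
```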
D: That's, I guess, one of the use cases. But for this specific question — they're trying to do this on bare metal — I don't know any other solution than just using kubectl at this moment. I don't know if you guys have any other ideas on how this can be achieved.
E: So the downward API will actually give you the node name — you can actually get it from the pod spec. You can reference spec.nodeName and put it in an environment variable, or you can put it in a file — either of those. And then the second part, which Archy already mentioned: have an init container with kubectl, which will go and query that node's labels and maybe put them in a file, or maybe put them in an environment variable.

E: So any node-specific information you will have to query using kubectl; but the problem that the person might run into — getting the name of the node where the pod is scheduled — that you can actually get from the downward API.
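A minimal sketch of the downward API part Yogi describes: the node name and host IP are exposed to the container without any extra RBAC (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-info-demo
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo running on $MY_NODE_NAME ($MY_HOST_IP); sleep 3600"]
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName   # name of the node this pod was scheduled to
        - name: MY_HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # that node's IP address
```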
A
So
I
believe
that
the
downward
api,
so
the
to
summarize,
as
out
of
the
box,
I
think
maybe
comments
said
I
didn't
see
that,
but
I
think
he.
The
first
comment
was
the
same
thing
I
was
going
to
say,
like
you
know,
exposing
node
information
metadata
to
a
pod
by
default.
That's
a
security
risk
because
that
that's
a
leaky
abstraction
like
a
pod
should
not
know
where
is
it
running
in
which
cloud
and
which
zone
like
it
should
be
like?
A
So
it
has
the
flexibility
to
run
on
any
node
and
also
not
know
anything
about
the
same
way.
The
container
doesn't
know
that
it's
running
in
a
pod
right
can
be
running.
Your
computer
can
run
in
a
pod.
The
same
thing
for
the
pod
should
be
like
not
aware
of
where
the
specifics
on
it,
but
there
might
be
reason
that
you
want
that
information.
So,
like
yoga
said,
two
things
that
are
usually
exposed
in
the
pod
is
the
the
node
name,
I
believe,
and
also
status
host
ip.
A: So those two things you can get. So: do you know which node you want the information from? That's the first step — maybe you don't want information on all the nodes; you want more information about the labels of the node that you're running on. That's the first thing to interrogate, and you can get it out of the box, without permissions.
A
So
the
next
thing
is
then
you're
going
to
be,
like,
I
could
say,
like
cups,
detail
and
any
container
so
either
you
modify
your.
If
you
have
one
single
container
that
container,
then
it
needs
to
talk
to
the
cube
api.
A
A
Like
how
how
do
I
get
access
to
to
like
do
a
get
list?
Node,
then
you
will
be
then
you'll
be
modifying
the
service
account
permissions
so
start
by
giving
like
the
minimum
information
that
that
service
account
needs
to
get
that
information.
But
then
I
think
arkey
brought
a
good
idea,
which
is
the
need
container.
So
maybe
you
want
to
you:
don't
have
access
to
the
code,
maybe
you're
you
don't
have
access
to
the
container.
Maybe
the
container
is
as
a
contract.
That
says
like.
A
I
need
these
five
variables
and
it's
done
by
another
team
or
another
vendor,
and
I
need
this
information
here.
I
think
the
good
a
good
pattern
is:
what
are
you
saying
like
creating
any
container
that
has
a
busy
box
or
something
like
a
shell
script
which
is
you're
starting
from
scratch
out?
You
know,
and
you
are
maybe
one
of
the
people
we're
talking
about
you're,
not
a
programmer,
maybe
you're
a
devops
person
and
using
bash.
A
It's
fine
right,
we're
talking
about
you,
don't
have
to
be
ashamed
of
using
bash
and
you
skip
ctl
and
run
an
image
on
that.
Any
container
that
boots
up
the
service
account
needs
access
to
the
cube
api
server
to
have
access
to
interrogate
the
node
labels,
and
maybe
that's
the
only
thing
that
you
need
but
take
into
account.
Then
the
other
container
will
also
have
access
to
that.
So
that
would
not
impede
that
second
container
from
getting
that
information
directly
or
getting
more
information
that
it
should.
A
So,
if
you
it
depends
how
the
r
back
permissions
are
granular
enough,
if
it
just
can.
Just
if
you
just
need
the
labels,
maybe
listing
nodes
is
enough,
but
not
getting
the
data,
but
listing
also
gets
the
whole
data
of
the
node,
so
be
careful
listing
investors
get
in
our
back.
I've
been
beaten
by
that
before,
like
I
give
listing
thinking
that
you
don't
get,
you
cannot
read
the
object
but
actually
listing
you
can
get
the
whole
object.
So
don't
get
beaten
by
that,
so
you
will
get
like
oops
like
cuba.
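A sketch of the narrow RBAC grant being described — the service account name is hypothetical, and note the comment on list, per the caveat above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]        # "list" would also return every node's full object
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
  - kind: ServiceAccount
    name: node-label-reader   # hypothetical service account used by the pod
    namespace: default
```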
A
City
will
get
node,
no
one,
all
the
ammo.
You
will
get
all
that
information.
The
other
container
does
so
yeah
write
a
bash
script
in
that
unique
container.
I
think
it's
a
good
idea
get
those
values
with
curl
or
vacuum.
Ctl
make
those
variables
like
environments,
that
ini
files
and
put
them
in
a
share
folder
with
empty
there
right
and
then
the
next
container
first
thing
it
needs
to
do
is
like
load
those,
though
that
file,
because
you
cannot
create
environment
environment
for
that
second
container.
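Putting the pieces together, a sketch of that init-container pattern under the assumptions above — the image tag, file name, and service account are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-via-init
spec:
  serviceAccountName: node-label-reader    # bound to the node-reader role above
  initContainers:
    - name: fetch-node-labels
      image: bitnami/kubectl:1.24          # any image with kubectl works
      command:
        - sh
        - -c
        # Query this pod's own node (named via the downward API) and write
        # the labels where the main container can read them.
        - kubectl get node "$MY_NODE_NAME" -o jsonpath='{.metadata.labels}' > /shared/node-labels.json
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: busybox:1.36
      # The main container loads the file at startup; the init container
      # cannot set environment variables for it directly.
      command: ["sh", "-c", "cat /shared/node-labels.json; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      emptyDir: {}
```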
A
So
I
think
that
summarize
kind
of
the
ideas
we
were
talking
about
right,
but
you
have
to
be
careful
because
if
you
just
want
to
share
the
labels,
that
second
container
will
actually
read
the
whole
yaml
like
capacity
of
that
of
that
node
right.
How?
How
much
memory
has
all
that
information
about
the
dominant
major
seekers?
But
it
might
be
a
security
hole
so
yeah.
I
didn't
know
about
this
link
so
I'll
construct
the
answer
and
post
it
and
discuss.
What's
the
status
of
this
issue.
A
It's
still
open
yeah
but
like
like
it's
a
bad
thing,
not
a
bad
thing,
we're
exposing
it's
it's
difficult,
because
you're
yeah,
then
people
were
like
I
want
to
be.
I
want
the
bot
to
be
exposed
to
anything
in
the
cluster
right.
Well,
we
have
our
back
there,
so
you
is
it's
another
pod.
Just
use
our
back
service
account
to
get
the
information
that
you
need.
Maybe
the
pot
should
not
have
access.
A
So
it's
a
tricky
problem,
but
I
think
the
pattern
that
we
provide
is
like,
I
think,
the
best
one
that
that
we
know
about
a
new
trade-offs
of
giving
that
service
account
permissions
to
get
that
information.
Yeah,
you
can
request,
you
can
talk
to
the
cluster
api,
close
the
grip
api
and
get
anything
that
you
want
in
there.
E: Most of the time when I have actually run into this particular question — like, how do I get node information, labels, and all that — it has been for enforcing some sort of anti-affinity kind of practice: like, whether you have brought up a pod on the same host for some cluster of your application, and all that. It's typically for those kinds of use cases that I have seen this question being asked.
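For that use case, the declarative route is pod anti-affinity rather than reading labels from inside the pod — a minimal sketch, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-replica
  labels:
    app: web
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: web                        # never co-locate two app=web pods
          topologyKey: kubernetes.io/hostname # "same host" boundary
  containers:
    - name: web
      image: nginx:1.21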
A
Which
is,
I
think,
it's
a
newer
than
the
anti-affinity
pattern.
I.
A
E
A
I
think
in
newish-
well
it's
just
116
here.
So
oh
and
I
think
this
is
the
the
oh
and
the
regular
no
label
but
yeah,
and
if
you
are,
I
work
with
openshift
a
lot.
Openshift
is
downstream
also
look
into
your
vendor
right.
It
could
be
that
you're
not
using
vanilla,
kubernetes,
so
also
look
into
your
downstream
or
kubernetes
vendor,
and
they
might
have
a
feature
that
you
can
leverage.
A: It was interesting for us — we have been working on this: like, yeah, it's in an init container and kubectl doesn't work... you try it and say: oh yeah, it doesn't work, I need to give it a permission — and then you give cluster-admin, right, to the service account — and you get it working, and then: wait, wait, wait, wait!
A: If someone is watching and you want to write a little blog post with a little example and share it on Discuss, that will help the person even more than just writing up an approach. How are we doing on time? The next one is this one — I think on this one we may not have a definitive answer. Let me see if it's up. So, this is a person who is running on-premise Kubernetes clusters.
A
It
has
bare
metal
servers
and
vms,
and
I
think
his
answer
is
asking
any.
He
includes
hpe
apollo,
which
I
would
not
familiar.
A
That's
hpc
so
bullets,
but
he
has
dell
servers
and
vmware
all
of
them
running
linux
and
it's
the
approach
of,
and
they
have
different
sizes
like
you
can
have
sizes
in
terms
of
cpus
and
and
and
memory
and
storage
is
another
angle,
but
I
think
he
was
asking
is
like
if
I
have
vms
from
different
sizes,
and
I
have
bare
metal
systems
of
different
sizes,
should
I
have
one
co,
it's
a
good
idea
to
have
one
cluster
with
all
nodes
and
different.
E: So this has actually been the case for the longest time, right. You would always have this — even if, let's say, all your servers are VMs, you will have different-capacity VMs. I think on bare metal it's even more prominent, because you might actually start with one set of servers and then, after some hardware refreshes or upgrades, you might actually get newer servers, and —
E: Yeah — so the assumption there is that your workload is agnostic of all that, you know, hardware. Like, for example, now we might even see some ARM-based — yeah...
E: Yeah, exactly. So that is one part of it, right. As long as you don't have any issues with that part — all of them are running the x86 architecture and they're all running Linux — you can have one big cluster; there's no problem with that, right. You can definitely have it. You can assign labels to the different nodes and their topology, and I believe the topology spread constraints that Archy posted might even help.
E
But
let's
say
if
you're
just
starting
out
and
then,
if
you're,
if
you're
engineers
who
are
building
applications
and
deploying
them-
and
they
don't
have
a
whole
lot
of
experience
with
kubernetes,
I
would
recommend
keeping
it
simpler
and
just
like
keeping
them
on
different
clusters
rather
than
having
one
large
giant
cluster.
D
Yeah,
I
I
you
know
this.
I
got
this
question
every
time
from
you
know
many
many
users
and
customers
and
like
people
who's
using
kubernetes
like
I
think
it
really,
as
you
said,
depends
where
you
are
in
your
journey.
If
you
on
the
cloud,
there's
no
question
like
I,
I
go
with
more
clusters
because
you
know,
control
plane
is
managed.
You
can
just
one
click
away
from
creating
cluster
or
running
a
terraform.
You
know
to
build
your
kubernetes
on
demand
like
it
takes
literally
less
than
a
minute
to
create
a
kubernetes
cluster.
D
So
obviously
it
simplifies
everything:
security
multi-tenancy,
it's
simple
for
everyone
to
deploy
workloads
securely
on
that
separated
kubernetes
clusters.
Obviously,
if
you
get
to
the
scale,
when
you
have
more
than
thousand
kubernetes
clusters,
then
then
probably
start
thinking
like.
Maybe
I
should
either
have
a
way
to
manage
all
of
this,
because
I
need
to
upgrade
so
once
you
get
to
that
scale
in
the
cloud.
D
Potentially,
you
will
be
looking
into
consolidating
your
workloads
if
you're
running
on-prem,
it's
a
little
bit
different
story
because
on-prem,
you
know
creating
a
cluster
depending
what
type
of
solution
you're
using
it.
Might
take
from
minutes
to
hours?
To
I
don't
know,
I
don't
want
to
say
months
but
like
it's,
it's
becoming
a
little
bit
more
complicated
the
process,
so
people
on
prem
they
tend
to
think
about
yeah.
D
I
want
to
have
less
clusters
because
it
takes
longer
time
to
provision
stuff
so
like
if
it's
a
bare
metal
and
if
it's
not
automated
process,
potentially
you'll
be
looking
to
adding
more
nodes
and
having
a
bigger
cluster,
and
then
you
really
need
to
understand
your
teams,
how
you
know
trust
domain
between
your
teams
and
the
applications
right
and
if,
if
they
there
are
sensitive
workloads,
potentially
like
stock,
two
compliant
workloads,
maybe
you
want
to
put
it
in
a
separate
cluster
if
it's
a
dev
workloads,
if
it's
you
know
less
less,
you
know
you
have
trust
between
your
teams
and
potentially
just
create
more
namespaces,
deploy
your
your
code.
D
There
and
you
know,
use
some
kubernetes
features
like
network
policies
or
you
know,
maybe
secure
it
with
istio
with
mtls.
So
like
there's
a
lot
of
features
that
kubernetes
provides
you
to
secure
your
namespaces
from
each
other.
It's
just
brings
a
little
bit
of
complexity.
Now,
whereas,
if
you
have
you
know
cluster
per
workload,
it's
it's
super
easy
right.
They
just
create
a
namespace
on
the
application,
that's
it!
So
it's
it's
your
call
like
the
the
person
who
is
doing
it.
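A minimal sketch of the namespace-isolation building block Archy refers to — ingress is denied except from pods in the same namespace. This assumes your CNI plugin enforces NetworkPolicy, and the namespace name is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: team-a
spec:
  podSelector: {}          # applies to every pod in team-a
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector: {}  # only pods from this same namespace may connect
```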
D
I
think
things
are
getting
better
for
the
teamwork,
because
we
have
cluster
api.
So
with
cluster
api
you
can
easily
deploy.
You
know
the
sphere
yeah,
but
if
you're
running
on
bare
metal,
unless
you
have
some
kind
of
a
magical,
bootstrap
process,
it's
it's
a
pretty
complex
setup.
C
Is
this
where
I
plug
tinkerbell
being
from
equinix,
probably
but
yeah
yeah?
C
I
I
would
look
into
that
if
you,
if
you
want
to
really
do
bare
metal
on
your
own,
having
some
sort
of
orchestration
platform
for
your
bare
metal
servers
like
a
tinker
bell
or
a
metal
cubed
or
something
along,
those
lines
might
be
a
thing
to
look
into,
but
those
are
all
relatively
early
on
projects
so,
depending
on
your
team
size,
you
may
not
want
to
have
to
invest
all
that
time
into
learning
those
and
get
them
set
up
the
day.
C: It depends, right. Personally, I would think that if you're doing it yourself and you really don't want to use a cloud, then do what you can to give yourself as much of a cloud as you can — so having VMware on top of everything, to make it all uniform, would, I think, help you a lot to make it simpler. But maybe you've got the resources and the people to expand and explore these other options.
F
The
question
is
about
running
heterogeneous
versus
homogeneous
hardware
clusters
right.
F: I don't have a lot of experience here. When I ran on bare metal, we ran homogeneous hardware as much as we could, and, very specifically, where we had different hardware — like in a data-centric use case — we had different cluster management there, like Hadoop, for instance, or Spark; so we didn't have Kubernetes, or a homegrown equivalent, managing those.
F
So
it's
a
little
bit
easier
for
us
back
then
to
run
something
that
understood
the
specific
needs
of
that
particular
like
class
of
work
data.
So
it
knew
how
to
exploit
that
particular
hardware
better.
So
a
generic
solution
like
kubernetes,
if
it
also
understood
you
know
that
these
containers
needed
to
be
scheduled
in
a
certain
way
and
exploit
the
hardware
in
a
certain
way
would
be
the
best
bet
in
my
opinion.
A
Yeah
yeah,
I
I
agree
with
with
all
the
thoughts,
so
one
one
piece
of
advice
is,
I
think
chris
pointed
out.
Maybe
maybe
you
want
to
put
vms
on
those
parameter.
Servers
like
that
would
be
kind
of
a
good
choice
like
hey,
let's,
let's
not
try
to,
because
it
depends
on
your
learning
journey
right.
A
I
don't
know
if
this
lab
had
been
around
for
20
years
and
they
have
this
old
hardware
or
they
just
bought
it
right,
and
then
they
have
new
people,
but
I
think
today
having
a
hypervisor,
it
could
be
the
vmware
esxi
or
the
open
source.
It
could
be
the
vsphere
or
it
could
be
kvm
put
put
a
hypervisor.
I
think
that
would
give
you
a
lot
of
benefits
in
terms
of
snapshotting
backing
up
by
managing
the
vms
versus
managing
one
single
operating
system,
trying
to
make
it
one
thing.
It
depends.
A
I
think
argus
said,
like
the
number
of
servers
that
you
have.
If
you
have.
Let's
say
you
have
eight
servers
and
you're
going
to
create
two
clusters,
and
you
have
three
masters
you
you
know
like
each
cluster
having
one
node,
that
you
don't
have
a
cha
right
so
consider
having
like
what
is
your
a
day,
two
like
christmas
and
day,
two
operations,
high
availability.
A
So
if
you're
in
a
physical,
I
I
I
usually
say
I
grew
up
in
a
in
a
in
a
data
center,
because
I
now
have
started
an
ibm
with
with
system
system
x.
It's
like
noise
all
day
but
yeah,
your
hc,
a
power
and
cooling,
your
availability
zones.
So
if
you
have
the
servers
of
this
of
the
size,
like
you
have
to
your
your
server
planning,
I
put
in
servers
that
I
can
sustain
the
capacity
of
having
redundancy.
A
If,
like
you,
lose
power
to
half
of
the
lab
right,
you
want
to
continue
working,
so
the
other
half
of
the
lab
has
ac
and
cooling
and
power
until
you
recover.
So
you
need
to
spread
those
workloads.
So
if
you
have
two
clusters,
then
you
don't
have
enough
fasters
to
to
spread
them.
So
it's
better
to
have
one
cluster
to
have
those
those
more
worker
knows.
A
So
you
have
a
high
availability
and
you
don't
you
know
waste
those
those
master
nodes
for
because
of
redundancy,
so
high
availability
using
vms
in
and
then
I
think,
yogi
said
like
the
learning
curve,
because
one
thing
is
like
you
throw.
You
know
you,
you
say
like
I'm
going
to
create
a
pod
or
a
deployment
and
hopefully
we
we
are
leaving
the
storage
to
the
side,
because
stateful
sets
right,
you're
running,
maybe
statuses
and
databases,
and
you
have
maybe
fiber
channel
or
iscsi
or
local
storage.
A
That's
it
that's
a
different
thing
of
like
how
you
attach
that
and
who
can
access
that
storage,
but
let's
say
you're
running
stateless
servers,
stately
puzzles
or
workloads
that
they
can,
they
can
fluctuate
like
they
can
be
deleted
in
one
you
can
scale.
You
can
scale
out
easily
or
scale
down
like
auto
scaling.
A
It
could
be
on
auto
scaling.
So
what
I
wanted
to
say
was
what
yogi
said
like
when
you
run
these
pods
and
you
run
kubernetes
by
default.
Usually
it
doesn't
come
with
all
the
customization,
so
the
scheduler
will
put
the
pot
like
where
it
fits
based
on
like
how
much
is
available.
So
if
it
sees
a
big
vm
or
a
big
worker
node,
usually
all
the
pots
would
like
go
there.
A
Maybe
you
don't
want
to
go
there,
so
we
talked
about
the
pot,
the
pot,
topology
constraints
and
the
anti-affinity
and
affinity,
and
then
how
do
you,
or
even
like
there's
like
custom
schedulers,
you
can
say
like
from
on
weekends.
I
don't
want
these
notes
in
in
here
right.
You
can
have
custom
schedulers
that
say
like
based
on
the
load
and
based
on
whatever
so,
but
that's
a
learning
curve.
A
You
train,
where
you
want
things
to
fluctuate,
so
they
put
lands
in
the
in
the
right
place
when
something
happens
and
that's
a
learning
curve,
so
start
simple
and
then
once
you
know
like
you
can
play
with
labels
and
ability
zones,
then
you
can
have
bigger
clusters
with
everything
inside
and
there's
nothing
wrong
with
having
a
big
cluster,
but
take
into
account
when
you're
going
to
upgrade
the
whole
cluster.
A
Do
you
do
it
one
by
one
and
then
that's
where,
like
oh,
we
spent
a
whole
month
upgrading,
but
we
spent
one
week
installing
right.
So
your
your
trade-off.
So
I
think
it's
a
big.
It
depends
I'm
also
looking
to
this
one
specifically
to
to
like
on-premise
a
data
center.
It
looks
like
you
say
it's
a
home
lab
or
like
a
data
center
that
you
own
in
your
company,
but
that
that
same
advice
like
argus,
said
it's
maybe
not
applicable
when
you're
in
the
cloud,
because
we
have
things
like
I
don't
know.
A
One
example
is
eks
has
carpenter,
it's
an
open
source
project.
Carpenter
can
work
with
aks,
but
it
can
work
with
other
types,
even
on
premise
that
you
can
create
node
groups
from
different
sizes,
so
you
can
actually
not
even
create
node
groups.
You
just
like
based
on
the
demand
of
the
pod
it
brings
in
a
node
and
that
node
can
be
big
or
could
be
small.
A: Yeah — we have one about Argo CD, which I think is very generic, and then I think this one is a good one and we can take a few minutes on it. I think this one is maybe a misconception about Deployments versus StatefulSets.
A
It
says
if
there's
any
benefits
of
using
relational
databases
on
kubernetes,
I'm
guessing
deploying
relational
databases
in
kubernetes
understand
case
allows
to
implement
a
relative,
easily
horizontal
scaling
deployment,
rollback
and
deployment
and
enrolling
deployments,
and
it
says,
like
it,
doesn't
know
how
a
database
relational
data
do
not
scale
horizontally.
A
Having
explicit
schema
implies
no
rollback.
In
general
cases,
rolling
deployments
may
mean
having
two
versions
of
the
code.
Talk
to
one
version
of
the
schema
can
be
problematic
in
general
case.
New
changes
must
be
compatible
with
the
old
schema.
Am
I
missing
anything?
Am
I
right?
A
I
cannot
take
advantage
of
horizontal
scaling
deployment
rollback
deployments
in
general
kubernetes,
so
I
think
this
is
a
maybe
a
confusing
person
that
that
is
thinking,
databases,
running
databases
as
deployment,
but
I
I
wanted
to
see
what
other
people
think
and
then
and
then
highlight
like
what
is
what
what
are
stateful
sets.
D: Yeah, maybe I can — or whoever wants to go first.
D
Yeah,
so
I
I
think,
yeah,
like
the
this
user
is
in
in
the
mark,
is
in
the
beginning
of
the
journey
right.
Obviously,
kubernetes
provide
interesting
capabilities
that
can
be
helpful
for
upgrades
and
stuff
like
that.
But
if
you're
using
kubernetes
deployments
they're
not
aware
about
your,
you
know,
application
and
database
right,
so
deployments
are
not
friendly
to
deploy
stateful
applications
because
they
don't
have
knowledge.
So
we
created
stateful
sets
and
kubernetes.
D
That's
supposed
to
you
know,
simplify
a
little
bit
this
journey,
so
it's
basically
allowing
you
to
map
specific
volume
to
the
specific
you
know,
stateful
like
set
right
so
like
if
you're
gonna
scale,
it's
gonna
scale
with
the
same
volume
and
it's
gonna
have
some
capabilities
like
you
know,
scaling
one
two,
three
four
like
in
in
the
in
some
sort
of
sequence
and
then
scale
back
in
the
sequence,
so
it
it
provides
a
little
bit
better
awareness
for
for
databases.
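A minimal StatefulSet sketch showing the properties Archy lists: stable names (db-0, db-1, ...), ordered scale-up and scale-down, and one PersistentVolumeClaim per replica. The Postgres image and the Secret are illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:14
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-secret     # hypothetical pre-created Secret
                  key: password
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PVC, reattached on restart
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```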
D: Is it, you know, a great solution for production on its own? There is knowledge that databases have — each database has its own knowledge — and that's why our community worked on a new set of tools and configurations that we call Kubernetes operators. Kubernetes operators let you insert that knowledge about a specific database and, you know, create features like backup and restore,

D: you know, upgrades, and stuff like that. Usually those operators are created by the vendor, or by a company that is strong in that space — so, for Postgres, you can find the Crunchy Data operator, and this company has knowledge around, you know, this type of solution, so obviously they're the main contributors to that project. And we have a place where you can basically go — I forget the name — a catalog of operators, yeah. So you can —
D: — go and, you know, find the operator — let's say for Postgres — and give it a try, and it already has all these functionalities for upgrades and scaling and, you know, taking advantage of all the Kubernetes niceness, right. That said, you know: do you want to go into that business and try to figure out how to run databases? If you're running on a cloud provider, I would probably look into, you know, whatever cloud you're using and its solutions.
D
So
if
you're
using
google
cloud
you're,
probably
going
to
use
cloud
sql
that
easily
lets,
you
create
a
database
right
and
then,
if
you're
running
amazon
it
could
be
rds
and
and
so
forth.
I've
seen
some
customers
really
going
to
use
kubernetes
because
they
they
are
specialized
in
that
business
and
they
really
need
maybe
some
specific
features
that
even
cloud
provider
doesn't
provide.
D
So
it's
it's
very
complex
question
and
we
also
have
a
cncf
project
called
d-tess
that
let
provides
you
like
horizontally
scalable,
my
sequel
solution,
so
that
that
could
be
another
approach
as
well,
but
yeah.
It
really
depends.
D
What
is
your
business
requirements
and
if,
if
you're
running
on
the
cloud,
I'll
probably
first
start
looking
into
managed
services
and
then,
if
you
have
a
special
edge
cases,
then
I
will
probably
look
in
kubernetes
operators
that
provides
this
capability
and
if
you're
running
on-prem
you're
probably
going
to
look
on
operators
more
than
on
anything
else.
A
Yeah
yeah,
I
agree.
Here's
one
one
document
I
I
think,
like
you,
said
the
beginner
markets
on
the
beginning
journey.
So
this
is
this
is
one
resource
if,
if
he
walks
through,
if
he's,
maybe
he
already
knows
deployments
and
he's
thinking
about
running
a
a
container
or
how
he
have
seen
for
better
or
worse.
A
Like
an
example
like
there's
many
examples
out
there,
if
a
deployment.yaml
that
has
a
mysql
dot
container
right
and
people
just
run
it
like
docker,
compose
right
and
then
like
it
got
it
working
like,
I
was
like
wait
a
minute.
How
is
that
going
to
work
like
the
real
application
right?
I
cannot
have
like
a
docker
compose
or
or
deployment
just
specifying
the
the
mysql
one
taking
a
look
into
the
next
step
would
be
like
understanding.
A
Stateful
sets,
which
is,
is
understanding
that
type
of
resource,
and
you
know
it
goes
through
example.
It
goes
through
an
example
of
my
sequel,
so
it
tells
you
about
the
the
primary
the
master,
the
master
one
and
then
the
replicas
and
then
how
the
replicas
you
know
you
can
scale
the
wrap,
the
read,
replicas
and
so
on,
and
then
the
next.
A
The
next
stage
is
operators,
mostly
at
this
point,
there's
many
capabilities
that
are
in
the
system
of
the
database,
that
you
need
an
operator
and
most
of
the
databases
well
known
as
databases,
the
either
there
have
a
paid
operator
or
open
source
with
support,
paid
or
just
open
source
for
the
operator,
and
then
yeah
run
if
you're
going
to
run
a
database
in
your
kubernetes
cluster,
run
an
operator
and
then
lastly,
arkit
said
like,
if
you're
going
to,
if,
if,
if
you're
new
to
kubernetes,
even
though
you're
new,
if
you're
thinking
about
running
your
own
database
on
kubernetes,
stop
stop.
A: It could be a freemium, a free tier — start with that and see — because what you're buying into otherwise is a lot of operational cost, and usually that comes with toil: not money, but toil, which is time and effort and the pain of human beings — and that's the most valuable resource, one you don't want to burn. So the advice is: if you're going to do it, stop and look for reasons why not to do it, right.
A
Why
not
to
do
it
right
and
then,
when
you
go
down
the
path
and
maybe
in
your
organization,
there's
someone
that
knows
very
well
what
they're
doing
and
say
like
no,
this
is.
We
have
proven
that
running
our
database
in
our
kubernetes
clusters
makes
sense
and
we're
going
to
do
one
more
one,
more
thing:
we're
going
to
run
it
in
a
separate
cluster
and
create
it
as
a
database
as
a
service.
I've
been
you
know,
telling
some
clients
like
okay,
we're
running
a
different
cluster
and
have
the
dba
team.
A
The
db
team
provide
the
database
as
a
service
to
us
right
and
maybe
have
it
in
separate
cluster.
They
can
argue
that
they
want
to
be
in
the
same
same
cluster
and
then
deal
with
multi-tenancy,
but
having
a
different
cluster.
Have
that
team
deal
with
running
that
database
and
then
the
the
apps
team?
You
need
a
database.
Well
then
you
do
an
external
connection
or
a
peer
connection
or
submariner
or
service
mesh
connection
to
that
database
and
hopefully
they're
in
the
same
vpc
or
same
network
they're
close
together.
A
The
network
really
doesn't
matter
but
yeah
stop,
and
I
look
for
a
reason
or
not
not
to
do
it,
but
if
you're
going
to
do
it,
operators
exist
and
there's
paid
ones
and
there's
free
ones
that
that's
summarized.
I
wanted
to
insert
that
that
no.
E
Yeah,
I
I
I
just
want
to
add
something.
Actually
I
had
the
same
sort
of
an
opinion
right
for
databases
and
kubernetes,
but
I
actually
work
for
yoga
byte,
so
full
disclosure
here,
if
you're,
going
to
run
a
database
in
kubernetes,
make
sure
it's
a
distributed
database.
Yes,
because
otherwise,
if
you
are
going
to
take
a
traditional
database
and
just
try
and
scale
it
on
kubernetes,
you
are
just
scaling.
The
reads
right:
you
will
create
more
headache
for
yourself.
If
you
use
a
distributed
database,
it
actually
leverages
the
distributedness
of
kubernetes.
E: Otherwise, yeah, you will just invite more problems. And I've actually responded to the person about the other two points that they mentioned, right. I think the challenge there is not just Kubernetes-specific; it's more of a microservices and version-iteration problem, so those challenges sort of remain. That's where you need release control — so you use tools like Liquibase or Flyway to do the schema management; even Sequelize, if you're in JavaScript, and so on. For every language —

E: — you have a different set of tools that can actually do the schema versioning, so that even when you're rolling back, they have the forward and backward capabilities. And you always have to go in the tick-tock way: you need to release a version of your application that can work with two schema versions —

E: — right; then you migrate the schema, and then you move the application forward again. So you have to do it in multiple steps. I mean, tools like — sorry — Flyway and Liquibase: I've used those, and they work really well for those use cases, actually, and they work even in the case of Kubernetes, too.
A
Databases,
problems
and
working
with
the
different
upgrading
and
that's
why
people
resonate
depending
on
the
application,
resonate
with
nosql
databases
and
then
sql
databases
and
graph
graph
databases
right,
graphics,.
E: I mean, the databases which sort of have a schema-on-read kind of mechanism — which can broadly be called the NoSQL kind of databases — it's easier with those when your schema is very fluid, but then you actually pay in terms of consistency with them. That's the problem.
A
Yeah,
so
I'm
I
was
showing
up
operator
hub,
yeah
and
also,
I
think
there
was
a
is
it
kubernetes
db,
some
some
project
that
was
like
collecting
all
the
other
operators
for
for
databases.
I
I
may
put
that
in
the
in
the
in
the
answer.
Also,
let
me
see
if
I
can
find
it,
but
I
think
we're
out
of
time
people
watching.
A: — live, thank you. So we're going to conclude. Any final thoughts before we close out? I appreciate the folks joining today, as always, and joining for an hour on a Wednesday — so Archy, Yogi, and Chris, thank you so much for joining. Dims had to go, but thank you for representing the SIG, SIG K8s Infra. And those of you who were joining — I don't know if I did it correctly or not:

A: whether this thing was showing up on Twitter — I think I added Twitter to the stream — but our official channel is YouTube. So, if you're watching on YouTube, join us next time; if you're watching the recording, join us live next time. And if you watched live: thank you for joining and taking time out of your day to join us. So, hopefully, we'll see you next month.