From YouTube: Kubernetes Community Meeting 20180726
A: Just a brief reminder that this is a community meeting and will be posted publicly on YouTube, and the Internet is forever, so whatever you're saying may or may not be replayed. I am still looking for a note-taker; if you're willing, please just let me know and take notes as you see fit, otherwise I'll be doing it. And last but not least, we will be having a glorious demo of Amazon EKS.
B: All right, so is this working for everyone? Perfect. You can, of course, read all about it at the link. From the main Amazon console page you can find EKS, and that brings you to your EKS page. When you start you'll get information about it, but I've already got some clusters created, so the first thing I'm going to do is just show you my existing clusters.
B: What we're really configuring here is the control plane part of the EKS cluster. All the actual worker nodes are part of the user's account, but everything that makes up the control plane is managed by Amazon EKS. So what we're really configuring here is, I guess, the interaction between the two. All right, so I'm selecting the role.
B: This is the role that Kubernetes will assume as it manages resources in the user's account, in your account. It's restricted, so it can create, for example, load balancers and manage all the security groups that need to be handled behind the scenes, but it doesn't give Amazon EKS full administrative rights in the account for that cluster. For my use I'm just using one, excuse me, one role, but you can use any number of roles per cluster.
B: The security group is what gets applied to, basically, the Kubernetes control plane. By default that is locked down so that only the minimal amount of network traffic can go between the control plane and the worker nodes. So it's secure by default.
B: So there's no outside communication between the control plane and the worker nodes besides what's exposed by, for example, the API server and the load balancer in front of the API server. There's no direct access besides that, because they're actually in separate VPCs. So I kicked off creating that cluster, but to fast-forward a little bit I'm going to go ahead and jump over to a cluster that I've already created, so I can skip the time it takes to create the actual Kubernetes cluster and set up the worker nodes.
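For reference, the console flow shown in the demo maps roughly onto the AWS CLI. The following is only a sketch; the cluster name, role ARN, subnet IDs, and security group ID are placeholders, not values from the demo:

    # Create the EKS control plane (hypothetical identifiers).
    aws eks create-cluster \
      --name demo-cluster \
      --role-arn arn:aws:iam::111122223333:role/eks-service-role \
      --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc

    # Poll until the control plane reports ACTIVE.
    aws eks describe-cluster --name demo-cluster --query cluster.status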
B: So before I go on, I want to talk about user authentication for the clusters. We're using the authenticator that was originally created by Heptio, and then through some collaboration we made it usable by EKS for the needs we have. As part of Kubernetes 1.10, a change was made to client-go and the kubectl authentication code so that we can use this external plugin to do the client authentication.
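As a rough illustration of that client-go credential plugin mechanism (a sketch, not the exact kubeconfig from the demo; the user and cluster names are placeholders, and at the time the binary was also distributed under the Heptio authenticator name), the user entry in a kubeconfig delegates token generation to the external authenticator:

    users:
    - name: eks-admin                  # hypothetical user name
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1alpha1
          command: aws-iam-authenticator
          args: ["token", "-i", "demo-cluster"]   # hypothetical cluster name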
B: So I'm talking to my Kubernetes cluster, and since I created the cluster I'm admin there by default. Moving on, what I want to quickly demo is using Helm and Tiller to create a WordPress site. By default Tiller isn't installed, so I'm quickly installing Tiller; it takes a minute to come up. So that's up.
B
And
gonna
leverage
helm
to
do
all
the
dirty
work
of
of
creating
everything
for
WordPress.
So
as
part
of
this,
it's
creating
the
low
bouncer,
which
is
going
to
be
a
a
a
OB
elastic
load
balancer
in
a
tub
us
and
it's
also
using
a
EBS
persistent
volume.
So
all
those
parts
are
in
my
account
in
you
know:
the
users
account
so
everything
that's
created,
I'll,
be
able
to
see
as
part
of
the
normal
ABS
console.
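The demo steps boil down to roughly the following Helm 2 commands; this is a sketch (the release name is made up, and in a real cluster Tiller should be given a properly scoped service account rather than installed with defaults):

    # Install Tiller into the cluster (Helm 2 style).
    helm init

    # Once the tiller-deploy pod is ready, install the chart.
    helm install stable/wordpress --name demo-wordpress

    # The chart creates a Service of type LoadBalancer (an ELB on AWS)
    # and a PersistentVolumeClaim backed by an EBS volume.
    kubectl get svc,pvc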
B: We have a CNI plugin that provides pod IP addresses without using an overlay network. It's using actual IP addresses out of the VPC where my workers are, so there's no reserving subnets or carving out address spaces; any address is available to either a node or a pod.
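A quick way to see this in practice (a sketch, not commands from the demo) is to compare node and pod addresses; with this CNI they come out of the same VPC subnets:

    kubectl get nodes -o wide    # node IPs come from the VPC subnets
    kubectl get pods -o wide     # pod IPs are allocated from the same VPC address space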
B: You can see it's out of that same 10.x.x.x/16 address space, so it makes management of the IP addresses a bit easier; no controller or anything is needed. It's just a DaemonSet that runs on each of the worker nodes and allocates the IP addresses within the VPC and subnet where the node is running. So there's no overhead from an overlay or any extra work with that, and it also still integrates with, for example, Calico network policy; you can still configure network policy for those nodes.
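For the Calico network policy integration mentioned here, policies are expressed with the standard Kubernetes NetworkPolicy API and enforced by Calico; a minimal sketch (the name and labels are made up):

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: wordpress-ingress        # hypothetical name
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: wordpress             # hypothetical label
      ingress:
      - from:
        - podSelector:
            matchLabels:
              role: frontend         # hypothetical label
        ports:
        - protocol: TCP
          port: 80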
B: I don't know if there's anything else you wanted to see on that, but those two nodes are where the rest of the Kubernetes workloads would run.
C: Not too much to say, other than that our feature freeze milestone date is coming up on Tuesday, July 31st. And maybe just to clarify, this is feature freeze in terms of feature definition, not actual implementation; the implementation freeze comes at code freeze, which is not for quite a few weeks yet, in September. So this is just feature freeze: we're looking to understand more concretely what the various SIGs are targeting for work across this cycle, and after Tuesday, major feature work coming in would need to go through an exception process.
C: So again, just reiterating: we want you to be thinking about your release themes, the major work intended for the 1.12 release, and, associated with that, your plans for documentation and test coverage, and about communicating those with SIG Release, across your SIG as well, and with any other stakeholders you might have. So that's basically it for where we are right now.
A: Awesome, thank you, Tim. Next up, I cannot pronounce his name, I'm going to destroy it, so anyway: there's nothing pending for the 1.11 branch at this time. Moving on to the KEP of the week: Jordan Liggitt, if he's here, I think I saw him come in, he was going to take this away. Just to give you an idea, it is KEP 17, moving ComponentConfig API types to staging repos. Jordan?
D: This is about taking the configuration that drives the core Kubernetes components from a bunch of loose flags to actual structured configuration objects. The KEP lays out the current state of the world, which, to summarize, is that the kubelet has blazed a trail: the kubelet actually has a configuration file format that is defined and is currently in beta, which means we commit to supporting it across a certain set of version skews and to making sure it can be migrated to a v1 config.
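As an illustration of that kubelet configuration file format (the beta API group referenced above; the field values here are just examples, not recommendations):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0
    clusterDomain: cluster.local
    maxPods: 110
    # Passed to the kubelet with --config=/path/to/this/file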
D: And so, if you look through this, there's a lot of discussion around where these types should live, what they should look like, making sure that we're paying attention to the ability to import them from projects outside of kubernetes/kubernetes, and making sure that we are not duplicating effort if there are common configuration aspects that a lot of different components need to make use of.
D: So, things like describing connection information for a client, or describing some of the leader election and debugging options that are going to be common across several different components: making sure we put those in locations where they can be reused and referenced. I would encourage you to look over this if you work on any of these components, or if you are involved in driving the deployment and setup of any of these components. Again, the work is staged out.
A: Awesome, all right. I don't think there are any questions in here. Okay, so Jordan, actually you are up first with the SIG updates, for SIG Auth. Oh well.
D: Over to my notes, then. I wanted to call out some of the things that are going on in SIG Auth in the 1.12 release, and I thought I'd start with something we don't always talk about a lot but that I think makes a big difference for people using the platform day to day, and that's just some usability improvements that we've made in the 1.12 time frame.
D: Prior to 1.12, if you were using Kubernetes with multiple authorizers in play (this might be a setup like GKE, or anything that combines a webhook authorizer with RBAC authorization) and you tried to do anything RBAC-related: RBAC is good at not letting you escalate your own permissions, but it would not actually honor superuser permissions from the other, non-RBAC authorizer. So you would see things like trying to apply a manifest and getting forbidden, despite being a superuser on the cluster.
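For context, the multi-authorizer setup being described corresponds to an API server configured with several authorization modes evaluated in order; a sketch (the flag values and file path are illustrative, not from the talk):

    kube-apiserver \
      --authorization-mode=Node,RBAC,Webhook \
      --authorization-webhook-config-file=/etc/kubernetes/authz-webhook.yaml   # hypothetical path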
D: That was just a little crazy-making, and there were workarounds you could do, but in 1.12 we will actually consult all the authorizers to see if you are authorized to create these things. So in 1.12, if you're a superuser, you may define policy on your cluster. Congratulations.
D: Similarly, prior to 1.12, if you had multiple authorizers in place, the error message that came back from the first one was what got shown to the user. So if you ever tried to do something as a service account and weren't permitted, the webhook authorizer might say "I have no idea who this service account is", and that's what you'd be told, which is a completely confusing error. We now indicate errors from all the authorizers, so you can say, all right:
D
Well,
one
didn't
know
about
the
user,
but
also
I
didn't
have
any
are
back
policies
for
the
Jesus
and
then.
Finally,
this
is
my
favorite
I
mentioned
that
our
back
prevents
escalation,
and
if
you
try
to
escalate
permissions,
it
gives
you
an
incredibly
detailed,
an
incredibly
unhelpful
error
message.
That
tells
you
all
the
information
you
needed
to
know
and
a
lot
of
other
information
as
well,
and
so
this
wall
of
text
is
what
you
get.
D: There are also improvements being made to the way the kubelet deals with certificates. Some of the client-go exec work that went in last release is being enabled for the kubelet, so that the kubelet can delegate to an external credential provider; this would let you do things like integrate with a cloud provider, or some other mechanism, to obtain credentials. It also supports requesting and rotating serving certificates, so you can now delegate obtaining a serving certificate to the CSR API.
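A sketch of what opting into serving certificate rotation looks like in the kubelet configuration file (field names from the v1beta1 kubelet config; as noted next, the CSRs this produces still need an approver):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    rotateCertificates: true     # rotate the kubelet client certificate
    serverTLSBootstrap: true     # request and rotate serving certs via the CSR API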
D: You still have to pair that with an external approval process, but that will be getting promoted to beta in 1.12. Work is ongoing on improving time-limited and audience-scoped service account tokens. And then there are the improvements we've made in audit: we're progressing towards v1 for the audit event API, and there is a KEP in progress for dynamically registered webhook backends, so you could add and remove backends that receive audit events. And that's all I have.
F: Can you see that? Yes, okay. So I don't have a lot of, I don't actually have any slides, it's just some notes that we took in our meeting. The very first thing is something we have announced before, but we wanted to point out once more: we have deprecated Heapster in 1.11 already, and the deprecation timeline going forward is that we are going to be removing its setup in 1.12, and it's going to be completely removed in 1.13.
F: Other work that is going on right now is reworking node metrics. This is a proposal that Solly, among others, is working on, and it's basically about reworking how all the metric sources that come together on a node work, so things like device plugins but also container metrics and all of those kinds of things, so that they have a unified, expected output in terms of metrics. There's also some metric standardization in there, in terms of shoulds and musts and so on.
F
So
if
you
work
on
note
anything
related
to
note
metrics,
then
you
will
want
to
look
at
this
proposal
and
the
next
thing
and
Sully
is
also
working
on.
This
is
a
large
refactoring
of
the
metric
server.
So
if
you
are
running,
maybe
a
non
production
environment
of
the
metrics,
where
you
are
using
the
metric
server
already
and
we
highly
encourage
trying
out
this
refactoring
in
theory,
this
should
make
metric
server,
actually
a
lot
more
stable.
But
of
course,
large
code
changes
can
have
a
risk.
So
if
you
have
the
best
possibility,
please
try.
F
The
next
thing
is
this
has
actually
already
merged,
but
there
is
now
advanced
an
advanced
mechanism
to
configure
the
Prometheus
adapter
for
the
custom
metrics
API
in
kubernetes.
So
previously
we
had
a
lot
of
expectations
of
how
your
Prometheus
server
must
be
labeling,
metrics,
an
order
for
for
them
to
be
able
to
be
used
by
the
metric
server
and
then
solve
a
lot
more
configurable
and
dynamic.
Now,
so,
basically,
you
can
configure
this
suite
to
your
environment
rather
than
having
to
configure
the
Prometheus
server
according
to
what
the
metrics
adapter
expected.
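To give a flavor of that mechanism, the adapter's rule-based configuration looks roughly like the following sketch (the metric and label names are placeholders, not from the talk):

    rules:
    - seriesQuery: 'http_requests_total{namespace!="",pod!=""}'   # hypothetical metric
      resources:
        overrides:
          namespace: {resource: "namespace"}
          pod: {resource: "pod"}
      name:
        matches: "^(.*)_total$"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'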
F: And last but not least, we have been disabling a number of end-to-end tests that have been flaking a lot. These are all tests that involve third-party services, where the third-party service was basically the cause of the flakes, and they are now behind feature flags in the testing infrastructure. Yeah, that's it. Thanks.