From YouTube: Kubernetes AWS Infrastructure - 20230406
A: Hello, oh God, and welcome to yet another Kubernetes infra meeting, though this one is specific to our AWS infrastructure. It is April 6th. We adhere to the Kubernetes code of conduct, which in general means: be awesome to each other and don't be a jerk. We have what looks like a very fun but quick agenda. If you have any other agenda items, put them down in open discussion and then we can nerd out about them there.

A: Before anything else: is there anyone on this call who has not been on the call before? Feel free to pop on and introduce yourself, or just introduce yourself in chat, but do not feel pressured. You can just sit there and lurk, and that is perfectly fine.
A: All right, first up: updates on the EKS build cluster.
B: Yeah, I left a few updates. So, how is it going? Because there is more capacity available, we now have a canary cluster, or rather we had the time on the infrastructure side needed to create one, and we have been successfully creating it, but also destroying it, so that we can iterate on some things, like IAM permissions and so on. The production cluster seems pretty stable to me. I think we have seven canary jobs in total, and most of them are pretty much green.
B: Some of them tend to fail, but that is very rare; I don't consider it a problem. One issue we had: we are now using AMD-based instances, so we switched back, because the flakiness we saw before was due to hard CPU limits and cgroups, and how the GOMAXPROCS environment variable actually works. So we basically added the environment variable for the affected jobs, and they are not flaky anymore.
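The flake fix described above, pinning GOMAXPROCS so the Go runtime matches the container's CPU quota instead of the host's core count, can be sketched like this. This is a minimal Python illustration; the `cpu.max` parsing follows the standard cgroup v2 format, but the round-up policy here is an assumption, not necessarily what the affected jobs use.

```python
import os

def gomaxprocs_from_cgroup(cpu_max: str, fallback: int) -> int:
    """Derive a GOMAXPROCS value from a cgroup v2 cpu.max line.

    cpu.max contains "<quota> <period>" in microseconds, or "max <period>"
    when the container has no CPU limit.
    """
    quota, _, period = cpu_max.partition(" ")
    if quota == "max":
        return fallback  # unlimited: fall back to the host CPU count
    # Round quota/period up (a 2.5-CPU limit gets 3 procs), floor of 1.
    return max(1, -(-int(quota) // int(period)))

# A pod limited to 2 CPUs (200ms quota per 100ms period):
print(gomaxprocs_from_cgroup("200000 100000", fallback=os.cpu_count() or 1))  # 2
```

The real-world equivalent is setting the GOMAXPROCS environment variable on the affected jobs (or using a library such as uber-go/automaxprocs) so the runtime stops assuming it owns every host core.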
B: We will be working on some further optimizations, mostly around cost, because we are not really happy with how much this costs: AWS forecasts that for the build clusters we have right now, we are going to spend around $21,000. So we want to look at this a little bit, because that is steep; this is not a big cluster.
B: We are planning on scaling it even more, so, yeah, that's a little bit bad, but this is something we will be focusing on before KubeCon, since there are holidays and such at the end of this week. So let's hope we'll be able to finish this part. That is it regarding the EKS cluster. Do we have any questions?
C: Yeah, it's small, and not a question but more like a comment: I don't think we need to care about that at the moment, because our costs will be really low this year compared to the overall budget. We have a three-million budget we want to use, and currently we're not even using 20% of it. Right now what is costing the most is blob and layer distribution from the container registry, and that cost is going to go down when we add more regions in the future. So I don't want us to be driven by cost optimization right now, because it isn't really affecting us at the moment. We have a huge budget we can use; let's use it and see what happens.
A: Yeah, and hopefully I don't sound too negative; I 100% agree with you, you know, that we have a budget and we need to use it. But another thing to consider is that there is quite a lot of internal Google spend that we want to try and move over once, like, the deal, the Fastly stuff with dl.k8s.io, is settled. We never had to worry about that before, to reduce GCP spend, but that's still something that, as the community, we need to try and get under our umbrella.
A: I bring this up because there's a whole lot of other Google spend that we aren't seeing, but technically we need to consider it and try moving it over. You're shaking your head. I think, given that right now (right now being the next two weeks) there is a Kubernetes release immediately followed by KubeCon, there's not really much that Marko and co. can do with the build cluster. They may as well just optimize it a little bit in the next two weeks, because what else are they going to do with the build cluster?
B: I would like to add one thing to this very good point from Jeefy: this window, from now until after KubeCon, is basically the only time we can break the cluster, because there are only canary jobs on it that we don't care about. Once we start adding more jobs, we really need to treat these clusters carefully and can't break them as often, whereas now we can experiment a little bit to see what gives better results. So, yeah.
A: Having talked with AWS, like Jim's skip-level, Barry Cooks, the dude we were on a meeting with: it sounds like the whole "we need to burn AWS credits like mad, otherwise they will not give them back to us" situation from before is not nearly as urgent as it was six months ago. We have a very good understanding with them; even if we only burned half, we will still get them back.
C: I don't want to be difficult; no, I think it's fine. My concern is that we should not overthink optimization right now; we are basically not in a position to do that. We should basically spin things up and see what happens. That's the one thing: let's not drive our technical decisions based on cost. What I'm saying is we should basically stay focused and say: okay, we have a scalability job we want to run a little bit, we want to move some CI jobs; let's just focus on that.
A: Do you think it is okay, though, for the next couple of weeks, for them (and I'm not even talking about cost optimizations, I'm talking about optimizations in general), because this is a holding period for the next two, two and a half weeks, right? Whatever they can do in the next two, two and a half weeks, great. And then again, post-release, post test freeze being lifted, post-KubeCon, we can start moving jobs on there, and that's when that cluster cannot be messed with too much.
C: Yeah, I'm not going to like my answer, wow. The one thing we need to do is an inventory of the jobs eligible for migration, because not every one can be migrated. That means going through the test configs: people can look at the job specifications, look at all the annotations, and see which jobs are eligible for migration.
C: Yep. Okay, again, we can add exceptions to that, but overall it's basically: anything related to chaos is not eligible for migration.
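The inventory pass described above could be sketched roughly like this. The job shape loosely follows Prow's job config (name, cluster, labels), but the exclusion keywords and the label name used here are hypothetical placeholders for the real eligibility policy, not the actual rules.

```python
def eligible_for_migration(job: dict, blocked_keywords=("gce", "gke", "chaos")) -> bool:
    """Screen one job definition for EKS migration eligibility."""
    if job.get("cluster", "default") != "default":
        return False  # already pinned to a special build cluster
    name = job.get("name", "").lower()
    if any(word in name for word in blocked_keywords):
        return False  # provider-specific or excluded job family
    # Hypothetical label marking jobs that need GCP credentials.
    if job.get("labels", {}).get("needs-gcp-credentials") == "true":
        return False
    return True

jobs = [
    {"name": "pull-kubernetes-verify", "labels": {}},
    {"name": "pull-kubernetes-e2e-gce", "labels": {}},
    {"name": "pull-foo-lint", "labels": {"needs-gcp-credentials": "true"}},
]
print([j["name"] for j in jobs if eligible_for_migration(j)])  # ['pull-kubernetes-verify']
```

In practice this kind of screen would run over the checked-in job configs and annotations, exactly as suggested in the discussion.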
A: I was just going to ask if you could write out the tests that are just straight-up not going to get migrated.
B: Regarding canaries: I think canary jobs make sense for stuff where we do something specific, like when we saw the S3 tests or something like that. But do we really want to go with canary jobs for everything? I can think of some jobs in a lot of sub-projects that are basically stuff like build, lint, and various verifications, which can be safely migrated and are 99 percent going to work. So do we want to go with canaries for everything, or do we want to, like, YOLO, just migrate them and see?
C: Okay, I don't have a strong opinion on this, so I'm going to open the question to the room, because I want to hear your opinions. Let me rephrase the question: for some of the sub-projects we have, the ones that basically just do linting, dependency verification, and stuff like that, do we want to reach out to the affected sub-projects before we migrate, or just do the migration? I don't have a strong opinion.
B: Kind of related to the point that was up for discussion: it is going to be ready at the point we say, okay, we freeze it, let's not do any changes, because it is stable. If you take a look, it is working pretty well, and if you say, okay, let's not focus on optimizations for now (which I still kind of disagree with) but let's start moving stuff, then yeah, it is stable.
B: I didn't notice any problems; the jobs that are running there are pretty solid. Other than the optimization changes that we might want to try, I don't see that we have anything else to do on it, besides eventually promoting Boskos and playing around a little bit with monitoring. But all of that can first be done on the canary cluster, and then we can promote to production after we're sure. Okay.
C: Yeah, oh, not targeted.
E: If you've already got a diverse set of canary jobs, which I hope you do, then I would say start putting presubmits there after the code freeze.
E: We can just revert it back into the cluster.
A: Yeah, and I would also argue, since we are again copy-pasting, that the jobs we copy-paste are the canaries. Because if those fail, if those start immediately failing or flaking on EKS but they're fine on GCP, well, then we know that there's a problem with the EKS cluster somehow, and then we have to go debug it.
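That "fails on EKS, fine on GCP" signal can be made concrete with a simple pass-rate comparison. A hedged Python sketch with invented run data; the 20% threshold is an arbitrary illustration, not an agreed alerting rule.

```python
def pass_rate_gap(results_old: list, results_new: list) -> float:
    """Pass-rate difference for the same job on the old and new clusters.

    results_* are lists of booleans (True = the run passed). A large
    positive gap suggests the job itself is healthy and the new cluster
    is the thing to debug.
    """
    def rate(results):
        return sum(results) / len(results) if results else 0.0
    return rate(results_old) - rate(results_new)

gcp_runs = [True] * 9 + [False]       # 90% pass on the GCP build cluster
eks_runs = [True] * 6 + [False] * 4   # 60% pass on the EKS canary copy
gap = pass_rate_gap(gcp_runs, eks_runs)
if gap > 0.2:  # illustrative threshold
    print(f"investigate the EKS cluster: pass-rate gap of {gap:.0%}")
```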
C: Okay, yeah, I think for this subject, let's not be too strict about it; let's be flexible and see what happens. My opinion is, my approach would be really simple: the sub-projects not targeting k/k, we can just migrate them, and the rest we can figure out as we go. I will basically leave it up to people like Marko: feel free to do whatever you want if you think it's safe. I don't have a strong opinion about this.
B: If we do a migration like that, we don't copy-paste; we just put the EKS cluster on the existing jobs and then they run on the new cluster. This is the only clarification I want to make, so that we are sure we're on the same page. Got it.
B: Okay, yeah, let's see, let's see about that, and we can probably discuss it async as well. So, yeah, do we have any other questions for the EKS cluster?
B: Okay, and yeah, I will go to the next topic, which is Boskos. The status of Boskos is that we have some idea about how it works, and I think we're interested in it. I want to thank especially Dims and Ben; I think they're not here today, but they did an amazing job explaining to us how everything works.
B: Okay, so I spoke to Dims; he told me to reach out to Rhian to request accounts, and I had some discussions in between, but then Rhian told me that Dims told him that we need to create accounts via Terraform, that we can't do it directly in AWS. And this is where the thing starts.
B: I have several views on this. My opinion is that we should probably go and create them however we can, even if it is manual, and just get going. What's concerning, when I'm talking about accounts, is this:
B: We need one account for the canary cluster, and we need another account for the Cluster API AWS tests. I requested 10 of them, since that is how many accounts we have for the main Boskos. But we should probably try to get those accounts created and tested, and then eventually see how we can do it with Terraform, especially because the person who would do that is on vacation until after KubeCon, and I don't really want us to be blocked on accounts.
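For context on why a batch of identical accounts is useful here: Boskos hands resources (in this case, AWS accounts) out to jobs on lease and takes them back when the job finishes. A toy Python model of that pattern; the account names are placeholders, not real accounts.

```python
class AccountPool:
    """Boskos-style lease pool for a set of interchangeable AWS accounts."""

    def __init__(self, accounts):
        self.free = list(accounts)
        self.leased = {}  # account -> owning job

    def acquire(self, owner: str) -> str:
        if not self.free:
            raise RuntimeError("no free accounts; job must wait")
        account = self.free.pop()
        self.leased[account] = owner
        return account

    def release(self, account: str) -> None:
        self.leased.pop(account)
        self.free.append(account)

# Ten interchangeable test accounts, as requested for the Cluster API AWS jobs.
pool = AccountPool(f"capa-test-account-{i}" for i in range(10))
acct = pool.acquire("pull-cluster-api-provider-aws-e2e")
print(acct, len(pool.free))   # one account leased, nine still free
pool.release(acct)
```

The design point is that jobs never hard-code an account; they ask the pool, which is why the accounts need to be identical and provisioned up front.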
C: Oh, okay, okay. There has been a lot of back and forth between people, and I was hoping someone other than me would take care of this, but okay: I will create those accounts. Not now, but at least we can start with one account. I think that's okay. Okay.
E: Obviously I have an IAM user; I'll create one for you, and then you can create one for, I believe it's Patrick, right? Yeah, all right, so you can do that for him. But this also ties into another item that has been dragging on for a very long time, and I'm hoping Jeefy has some updates about it.
C: I think an in-person conversation will happen about this, and we'll see what comes out of that.
E: I'm going to leave it with Jeefy and Arnaud; I don't know. The ask is quite straightforward, really: I need an Azure AD tenant, and then we can set this up.
E: Things could get very funny after that, but that's a Jeefy problem. Yeah.
C: I think we need to reset expectations about this: this might take a long time. Anyone on the call should expect that this might take a long time, because it's a conversation between enterprises, and that always gets complicated. So in the meantime, we have to deal with what we have.
E: In the interim, actually, we can use AWS SSO to create users and groups and then deal with it that way; that will solve our problems for a while, I believe. Yeah.
B: Yeah, but I have to get back to the Boskos topic. Do you first want to finish with this topic, and then we can get back to it?
A: The last thing that I will say about the Azure AD thing: the biggest roadblock is actually not getting the SSO stuff, getting funding, getting credits, whatever; it's actually getting an Azure account, because the CNCF does not have an Azure account. We have a GCP account, we have an AWS account, but I cannot, like, me as Jeefy, just go create an Azure account. That would be bad.
E: I don't see what's bad about that; enterprises do that all the time.
A: Sure, some enterprises and companies do that, but because of the CNCF, and the fact that it is not just a non-profit but a software foundation that owns all the trademarks and all the stuff around the projects, legal stuff is, like, the thing we do not mess with. We're talking: I can go and sign up for, like, Todoist or some small thing on behalf of myself, and the CNCF would reimburse it, but the minute that I have to sign something and it is for the whole of the foundation, that's different.
A: And let me also make it clear: even though, probably for the next month, I am not going to really talk about Azure, because I just won't have updates, that does not mean there aren't emails flying around back and forth trying to get this going. But the other thing, and this is another thing that you don't even see happening unless you're on the inside: does the CNCF own this account, or is it an LF-owned account? Is this something that we want LFIT to manage versus the CNCF?
C: Okay, we don't have another subject, so I will give you back, like, 19 minutes of your time.
E: Oh, I forgot something: Patrick was supposed to work on the Atlantis automation. Did you manage to sort that one out, or is that still in progress somewhere?
D: Yeah, I mean, I can give some updates. Basically, I was playing around with it; I even have a draft proposal somewhere, but I waited until the whole registry thing was over so that more people could look at it.
D: So yeah, if you are interested, I can share it on the channel, but I have some concerns, mainly security concerns. The project is really nice, but it's not meant for public repos. I mean, it is possible, but first I would rather clean up our Terraform structure and so on. But I can share the doc and we can discuss whether that's what we want. Thanks for reminding me.
E: All right, sounds good. Open a PR and bring the doc along, and then we can talk about it. I am interested in seeing what it looks like, because I want to adopt it for another CNCF project, and Arnaud gets to do less manual work on his computer, so we can get changes shipped quicker.
C: Just to be clear, the priority goes to Boskos and Prow. Yeah, I agree; basically, we need to make sure we are in a position where we fully use AWS as part of Prow, as part of the test infrastructure, before we even think about optimizing Terraform execution.
A: Oh, actually, this spurred another agenda item on the spot that I wanted to bring up. Does everyone remember, or vaguely know of, that shiny, interesting timeline that got shared around, that was from the CNCF?
A: We are revising that, and it will be a lot more reflective of (a) what we've done and (b), in general, what the community and whatnot is focused on doing. I will distribute it once we have a new version; I'm just letting you know that I'm distilling all of the conversations that we have been having and trying to make a more realistic timeline than what was originally distributed back in, like, January.
C: I invite everyone to check page two of the public billing report; you will see the delta. I'm going to share that right now if anyone wants to see it: check page two of the GCP report, and you will see basically what Justin talked about.