A
And it looks like today we have a very one-on-one meeting: just me, myself, and Carlos. I don't know if I'm pronouncing your name correctly.
A
Okay,
cool
and
just
reminder
that
this
is
a
meeting
that
is
under
the
kubernetes
community
under
the
cncf
code
of
conduct
in
general.
This
is
just
be
nice
to
each
other.
I'm
gonna
share
my
screen
and
post
in
the
chat,
the
link
for
the
agenda.
A
Cluster API itself has already released version 1.1, and in our project we haven't released that yet. I'm proposing to recreate the release-1.1 branch, take out the unnecessary jobs in test-infra, and then release 1.1.
A
I guess this is a good thing to do. I was checking the other providers, and they have already released 1.1. The only thing we need to do here — I can check it, or you can do that as well if you want — is verify whether our Cluster API dependency is on the latest version that Cluster API released. If it is, we can do a check right now. Actually, let's do that.
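The dependency check described here amounts to reading the provider's `go.mod`. A minimal sketch — the `go.mod` below is a made-up example and `v1.1.2` is illustrative; in a real checkout you could instead run `go list -m sigs.k8s.io/cluster-api` from the repo root:

```shell
# Write an example go.mod so the snippet is self-contained
# (in practice you would grep the provider repo's real go.mod).
cat > /tmp/example-go.mod <<'EOF'
module sigs.k8s.io/cluster-api-provider-gcp

go 1.17

require (
	sigs.k8s.io/cluster-api v1.1.2
)
EOF

# Extract the pinned Cluster API version from the require block.
grep -E 'sigs\.k8s\.io/cluster-api v' /tmp/example-go.mod | awk '{print $2}'
```

If the printed version lags the latest upstream Cluster API tag, the branch needs a `go.mod` bump before cutting the release.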
B
Are we using the test package anywhere in the GCP provider? Because I noticed — I was actually working on one of the commands in the CAPA repository, and while I was working I noticed there were some go.mod changes and a new version, apparently 1.3, in the test package. I'm not sure if...
A
So: refresh the test jobs for the release-1.1 branch, then prepare the release for Cluster API Provider GCP, and then we open the main branch for upgrading to the next release of Cluster API, which is 1.2, I guess. And I'm not sure whether they have upgraded to Go 1.18 yet; I think the main branch of kubernetes/kubernetes is using Go 1.18.
A
Cluster API provides the features that the providers implement, and we are always in sync with Cluster API, because Cluster API also provides clusterctl, the tool you use to interact with the clusters — with the management cluster. Because of that, if you are using, for example, version 1.0, you most likely cannot use a Cluster API provider that is on 1.2. So we always need to match.
B
I see. I remember Richard and I and one of the fellows were having a discussion about GSoC projects for CAPA, and during the meeting Richard actually explained a lot about the internal workings of CAPA — what role the controllers themselves play when it comes to deploying different resources on AWS. So I'm assuming a similar kind of process goes on in the GCP provider as well, correct?
A
Yeah, all the providers are pretty similar — CAPA for AWS, CAPZ for Azure, CAPG for GCP, DigitalOcean, and others — they are pretty similar. The only thing that makes the difference is the API calls to the cloud providers; that's what differs, along with the way we interact with the cloud provider. But at a high level almost everything should be the same: the same types of controllers, the same types of...
B
That sounds cool. I kind of get the idea that there's actually a lot of work remaining in the GCP provider itself compared to the other major providers like Azure or AWS, right? So are there some issues I can start working on right away as a beginner, or even medium-level issues that, after going through some documentation and technical sources, I could start working on?
A
Sounds good. I think there are a few issues open in the project. You can take a look at those, and then maybe post in the Slack channel if you want to work on one. If you have any questions, you can ask in the channel; I can answer, and Richard and others in the community can also help.
A
But it's nice to have you on board.
B
Yeah, that sounds great. Specifically, I have a question: are there any issues, or any plans, for implementations in the CI jobs themselves for GCP?
B
For example, right now the project I'm working on in LFX for CAPA involves automating the AMI build, test, and publish pipelines using the GitHub API as well as the prow jobs. So that's the project I'm currently taking on for CAPA.
B
Probably
interrupt
it,
it's
basically
since
aws
has
amazon
machine
images
right
yeah,
so
that
process
currently
takes
place
pretty
much
manually.
So
right.
B
Has
the
responsibility
to
to
publish
the
emis?
They
just
have
to
run
the
script
on
their
system,
and
then
it
takes
quite
a
while
to
like
pretty
much
publish
all
the
mis
through
all
the
regions
and
for
the
specific
oss
that
have
to
like
pretty
much
implement
in
that
right.
A
For GCP, in this case, we already have a job that runs nightly, which generates the images and publishes them for all the active Kubernetes releases. I'm not sure about AWS, but for now we are publishing those images in a GCP project, and those images are public — anyone can consume them. But I'm not sure...
A
I
need
to
speak
with
the
people
in
aws
to
see
how
they
are
doing,
because
I
think
we
don't
have
like
a
dedicated
project
to
publish
the
the
image
for
gcp
like
as
official
one
usually
who
is
who
is
running
cluster
api
for
their
like
needs.
They
need
to
build
the
images
and
then
publish
those
images
in
their
projects.
A
It's
different
a
little
bit
different
from
aws,
and
I
guess
for
microsoft,
but
this
is
something
that
we
like.
I
would
say
if
you
we
open
an
issue
and
asking
like
to
have
a
let's
say
our
official
publish
images
for
gcp
class
api
gcp.
I
think
we
can
work
on
that
for
sure
the
jobs
already
exist
like
it's
running
nightly
every
every
every
day.
A
The images are, for example, for 1.18 — 1.18.18, 1.18.20 — and then we have 1.23.3, 1.22.6, and so on. If we want to have an official one that is released, we can just change this job, or create another one, to publish and copy the images to the correct place.
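The nightly publishing job described here would live in the kubernetes/test-infra repository as a periodic prow job. A hedged sketch of what such an entry might look like — the job name, container image, and command are hypothetical, not the real test-infra definition:

```yaml
# Hypothetical periodic prow job; the actual definition lives in
# kubernetes/test-infra and all names here are illustrative only.
periodics:
  - name: periodic-cluster-api-provider-gcp-build-images
    interval: 24h            # run nightly
    decorate: true
    spec:
      containers:
        - image: gcr.io/example/image-builder:latest   # hypothetical builder image
          command: ["/bin/bash", "-c"]
          args: ["make -C images build upload"]        # hypothetical make targets
```

Promoting the output to an "official" location would mostly mean changing where the upload step copies the images, plus the credentials it runs with.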
A
But
for
that
we
need
to
discuss
with
the
cluster
api
maintainers
and
maybe
with
the
is
someone
in
the
kubernetes
to
we
see
if
we
can
push
those
emails
to
the
production
bucket
or
to
another
place.
This
that's
some
discussions
we
need
to
to
have
before
we
do
that.
It's
it's
a
little
bit
different
from
aws,
because
maybe
the
blast
is
supporting
those
things
and
for
asia
as
well.
But
in
this
case
here
we
need
to
check
that's
a
good.
B
So I'm a bit curious about this job itself. Was it provided by the image-builder team, or was it created by the people working on...?
B
So how do you actually manage the credentials? Because I believe the credentials themselves... I'm not sure if, for the Google provider, they are managed in a different manner, but in the case of AWS, I think they lease the credentials using their internal implementation, such as Boskos.
A
Yeah
yeah
yeah
for
aws
that
is
bosco's
that
like
allow
their
in
this
case
as
well
in
the
in
the
gcp
we
use
bosco's
as
well
like
to
provide
the
the
keys
but
sites
we
are
using
the
staging
bug
the
staging
project
we
have
like
there's
another
specific
case.
We
pass
in
the
pro
job.
B
Okay,
so
the
place
where
all
these
images
are
being
published,
whichever
account
is
it,
is
it
a
cncf
provided
account,
or
is
it
like
a
third
party
of
them.
A
The
this
project
here
for
this
year
this
is
a
like
a
community
kubernetes
community
project.
This
is
under
the
cncf
account,
but
this
is
like
the
same
for
the
others
like
there's
a
staging
there's
a
cate
staging
kubernetes,
one
that
we
release
kubernetes,
that's
another!
That's
all
the
stations
are
the
accounts
that
we
push,
binaries
or
images
or
anything
for
ice
station
proposed.
That's
gonna
be
promoted
to
production.
A
B
I see. So the credentials must be fixed in this case, instead of being random — yeah, for example, for test purposes, I guess they might sometimes be randomized.
A
Testing
yeah,
indeed,
for
this
job
specific,
we
use
degradations
that
allow
to
push
for
this
specific
project.
B
Okay,
so
for
so,
these
credentials
are
actually
like
as
an
environment.
Variable
these
fixed
credentials
are
mentioned
within
the
container
itself
or
like
do
we
have
to
this.
A
If
you
need
like,
for
example,
if
you
need
to
create
a
new
set
of
credentials,
there
is
a
process
in
the
testing
for
a
repository
that
you
can
check
that.
You
need
to
open
some
pr's
to
add
the
credentials
and
talk
to
some
persons
that
have
the
permissions
to
create
the
secret
inside
the
the
cluster.
A
Yeah
yeah
there's
a
lot
of
like
infrastructure
before
you
have
the
actual
job
that
needs
to
be
done
before,
like
which
is
create
a
secret
check.
If
the
secret
is
working
like
upload,
the
secret
to
the
cluster,
create
the
secret
in
the
in
the
pro
world
and
make
that
pro
job
and
allow
to
to
use
that
secret
to
to
inject
it
when
it
runs
in
your
job,
there's
a
lot
of
things
that
needs
to
be
done
beforehand
to
to
after
you
create
the
job
itself.
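The "upload the secret to the cluster" step described here could be sketched as a standard Kubernetes Secret manifest — all names below are hypothetical examples, not the actual test-infra secret:

```yaml
# Hypothetical Secret a prow job could mount; the name, namespace,
# and key are illustrative, not the real test-infra configuration.
apiVersion: v1
kind: Secret
metadata:
  name: gcp-publish-credentials
  namespace: test-pods
type: Opaque
stringData:
  service-account.json: |
    {"type": "service_account", "project_id": "example-project"}
```

The prow job definition would then reference this secret so the key file is mounted (or exposed as an environment variable) when the job runs.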
B
I see, yeah, I get it. Because I think right now we are using the Heptio account for publishing the images, so it's owned by VMware, apparently, and that's something we were working on — because my task actually involves some things where we might require the credentials for the account to which we have to publish the AMIs. But in the case of the current system — the current way we manage credentials, at least for prow...
B
I
think
there
is
no
way
there
is
no
safe
way
as
of
now
to
use
our
credentials
publicly.
So
I
think,
as
you
mentioned,
the
testing
fronting
we,
we
might
need
to
communicate
with
the
testing.