Description
Meeting notes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit#
A
Welcome, everyone. Today is March the 9th, 2022, and this is the Cluster API project meeting. Cluster API is a subproject of Kubernetes SIG Cluster Lifecycle and, as such, we adhere to the Kubernetes community standards. So if you'd like to talk, please raise your hand and I will call on you, and in general treat everyone as you might expect to be treated; so, you know, be kind. To begin with, we'll go through the open proposal readouts, and I'll give a chance here.
A
If anyone would like to talk about any of these, please raise your hand and I'll call on you.
B
Hey folks, I just wanted to bring up the IPAM integration proposal, which is number three on the list.
B
It's been open for a while, and there have been a few comments, and the author has been actively addressing those comments. I would like to see if reviewers and other maintainers could take a look at this and see if it is in a logical state, so that we can move forward with it. I personally work with CAPI, and this is something that we do get asked about a lot.
B
This might not be very useful for the public cloud providers, but for creating clusters in on-prem environments this is a must-have feature, I would say. So yeah, if folks would be kind enough to take a look at the proposal and give their feedback.
C
It seems to me that we are quickly getting to a state where we can call out a lazy-consensus deadline. There are some comments, but they are just nits. So I would like to invite people to take another look and then, possibly next week, we will start the lazy-consensus period with a deadline for approval.
C
And then, with regard to other proposals, there is a document about supporting cluster add-on orchestration. I have worked on the motivation part; this is best-effort work for me, but if someone wants to take a look at it, I've just broken down the motivation. Also, there is another document, which is not yet a proposal but is linked in the main doc, about status conditions.
C
There is a set of nice PRs, mostly from Alberto, working on conditions. What I'm trying to do is basically make a summary of all the conditions that we have in Cluster API and check whether they are consistent, so that I can give better feedback to Alberto and make sure that we are going in a consistent direction. If someone is interested in this work, which is about observability, please take a look and give feedback. It is not a proposal; it is just a status update.
A
Okay, so any other open proposal information? Would anyone else like to add a comment?
A
Okay, I'm not seeing any hands going up, so now we'll take a little chance to welcome new attendees. This community has been very vibrant over the last couple of years, and we've had a lot of people joining us, so we like to give new community members a little chance to introduce themselves and maybe say a little something. If you'd like to, please raise your hand, or just unmute and introduce yourself to the group.
D
Hey guys, I don't know if you can hear me.
D
Yes, so I'm quite new to open source. I am working with Cluster API at my company, so I got interested in participating in discussions and understanding the future of the open source community. This is all quite new to me; I really don't understand everything that's going on, but I'm really hooked by the open source idea overall. Okay, so that's it, I guess.
E
Hi everyone, my name's David Bloom. I've been doing Kubernetes for a while, but I'm new to this particular SIG. I have the pleasure of working professionally with Chris Nova, and we're doing a lot of Cluster API and Cluster API Provider AWS work in-house right now, and it has been a really awesome tool. I'm really excited to be using it.
E
We just had our first multi-account stand-up success yesterday, so we're able to provision from one management account into a target account, and that was huge for us. We learned a lot along the way and may have some contributions back to guides and documentation, which will get up there soon for the things I found. But it's been awesome: super great tool, super great community, so I'm really excited to be here, help out, and learn more.
A
Okay, I'm not seeing any more hands raised, so let's move on to the discussion topics. Fabrizio, you're first up with the monthly patch releases.
C
Yeah, yesterday we created the monthly patch releases for 1.1, 1.0, and 0.4. Thank you to all the contributors.
A
Awesome, yeah, thanks to everyone. So next up: we haven't had a demo in a few sessions. Stefan, you've got a demo for us, so I will...
F
Stop posting, and hopefully someone comes on.
C
As you know, we are always looking for new reviewers and maintainers to help, and we have two PRs for people stepping up: Kilian and Yuvaraj, who are stepping up for the bootstrap and control plane areas. Please go and provide your feedback. I also want to give a reminder that we have a contributor ladder; we are welcoming everyone who is willing to help, and the code base is divided into areas with separate OWNERS files.
C
Sorry, one last note: while doing this, I noticed that we don't have OWNERS files for ClusterResourceSet, so for add-ons, or for MachinePools. We don't have separate OWNERS files even though there is a lot going on around these topics. Maybe we can consider adding separate OWNERS files just for these parts. So if you are interested, please reach out.
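For context, a Kubernetes-style OWNERS file scoping reviewers to one area is a small YAML file; a minimal sketch of what one for a ClusterResourceSet directory could look like (the alias names here are illustrative, not the project's actual ones):

```yaml
# OWNERS file for a hypothetical ClusterResourceSet area (alias names illustrative)
approvers:
  - cluster-api-clusterresourceset-maintainers
reviewers:
  - cluster-api-clusterresourceset-reviewers
labels:
  - area/clusterresourceset
```

Placing such a file in a subdirectory lets the bots route reviews for that area to the people listed, without touching the top-level OWNERS.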
A
Yeah, sounds good; I'll make a note here too. So that's also ClusterResourceSets, right?
A
Okay, I guess, Fabrizio, you've got the next one: looking for more volunteers.
C
Okay, so last week one PR, five one four, basically got closed by the... sorry, by the bot, and this PR was about documenting a set of guidelines for providers to implement multi-tenancy in a consistent way.
A
Okay, now we're on to the demo section. Richard and Claudia, would you like to take it away with a demo of the Cluster API Provider MicroVM?
G
Yeah, sure. Who should we make host there? I guess if you can make me host first, and then I can just quickly pass it over to Claudia to do the actual demo.
G
Brennan, thank you. So I will just share a screen. I'm just going to cover the background very, very quickly, and then Claudia's going to do the real demo, which will be the most interesting part. So we've been building a new CAPI provider, Cluster API Provider MicroVM. As the name suggests, it's about provisioning Kubernetes clusters using lightweight virtualization, and specifically Firecracker; so yeah, we use Firecracker for the virtualization.
G
Well, we do that because Firecracker is very quick, it's designed with security in mind, and it underpins some AWS services, like Lambda and Fargate, that use it at massive scale. I'm going to whiz through this really, really quickly. So when we talk about CAPMVM, which is the acronym that we've given it, we talk about two locations, you know, just in general.
G
So we have a management location and an edge location. For us, edge locations start with some sort of bare-metal provisioner. We use Tinkerbell, but, you know, pick your poison in this respect. We plug some bare-metal machines in somewhere, and the bare-metal provisioner takes over and provisions those bare-metal machines.
G
It basically puts three things onto those bare-metal machines: containerd, Firecracker, and something called Flintlock. Flintlock is basically an API server, something that we've built (it's open source), that you can call, and it will create micro VMs.
G
So you repeat this on any number of bare-metal machines, and they basically sit there doing nothing. This is where the CAPI side of things comes in. We have a management cluster, and we say we want to create a micro-VM-based Kubernetes cluster, and that starts with the cluster definition. So we have MicroVMCluster and MicroVMMachine, essentially, in there, and we say we want to create a cluster with five nodes in micro VMs, and that's applied to the management cluster.
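As a rough sketch of the cluster definition he describes: the standard CAPI pattern pairs a Cluster object with a provider-specific infrastructure object. The kind and field names below follow that general convention and are illustrative rather than the provider's exact schema:

```yaml
# A CAPI Cluster referencing a MicroVM infrastructure cluster (names illustrative)
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: mvm-demo
spec:
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: mvm-demo-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
    kind: MicrovmCluster
    name: mvm-demo
```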
G
You
can
statically
say
this
is
my
these:
are
my
hosts
iterate
through
them
or
and
place
them
across
those,
but
we
are
also
working
on
a
scheduler
so
that
that
can
actually
interrogate
the
bare
metal
provision
and
say
what
bare
metal
machines
do
I
have
and
then
likewise,
you
know
what
is
the
usage
of
those
machines
at
the
moment
we
are
only
doing
the
static
placement,
and
this
is
when,
after
it
knows
where
to
place
those
micro
vms,
the
normal
capi
process
then
starts.
G
It
does
some
stuff
to
to
massage
those
and
make
make
them
available
in
a
way
that
is
suitable
for
firecracker,
and
then
it
basically
starts
instances
of
firecracker
which
create
micro,
vms
and
those
micro
vms
then
become
individual
kubernetes
nodes,
and
this
is
repeated
many
times
on
the
same
host
or
on
different
posts.
When
a
cluster
is
formed,
so
I
will
probably
pause
there.
I
have,
I
have
some
slides
on
what
is
firecracker,
but
it's
probably
better
just
go
to
the
demo.
I
think
which
claudia
has.
H
All right, can everyone see my screen? Yep, you see four terminal windows there. Perfect. I have zoomed in a lot, so I hope that it's all visible. All right, so welcome to Liquid Metal. I've done this demo now about 10 times; this is the first time I've done it with people who might actually know what I'm talking about, so this is very exciting. I've cut a lot of it out, so I hope it all still has some sort of narrative. So the layout here: on the right I have two hosts.
H
These
are
the
bare
metal
hosts
that
richard
was
talking
about.
These
are
hosts
and
equinix.
I
think
I
put
them
in
now,
so
I
can't
remember,
as
you
can
see,
they
are
running
flintlock,
because
that
is
the
micro
vm
service
that
will
call
out
to
firecracker
to
create
the
micro,
vms,
there's
also
some
notes
here
about
flintlock
and
container
d
and
firecracker.
So
if
anyone's
watching
the
video
later,
they
can
read
those
okay.
H
Configuration-wise, we're going to use Cilium for the child cluster networking, and then we can go ahead and initialize the providers. So that's going to go ahead and do that. On the side here, I'm just going to get ready to... yeah, just going to load up a watch, just so we can see the micro VMs come up. Obviously this is just the state; it's not actually the live representation, but it looks quite cool. It's just a nice way of seeing what's coming.
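The initialization step here is the standard clusterctl flow; a sketch of the commands involved (the infrastructure provider name is illustrative, so check the provider's docs for the exact value):

```shell
# Initialize core Cluster API plus the MicroVM infrastructure provider
# on the management cluster (provider name illustrative)
clusterctl init --infrastructure microvm

# In a second terminal, watch the machines come up
watch -n1 kubectl get machines -A
```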
H
I
suppose
I
could
also
do
like
ps,
orbs
or
firecracker
processes,
but
it
looks
a
little
bit
messier
and
harder
to
read
in
black
and
white,
so
yep
still
waiting
for
cert
manager
always
takes
the
longest.
Doesn't
it
and
there
we
go
now,
it's
cracking
on
excellent,
so
that
is
our
management
cluster
all
set
up,
and
now,
let's
configure
some
stuff:
let's
have
the
cluster
name,
nice
and
imaginative.
H
I'm
just
going
to
have
one
control
plane,
node
here
and
I'll,
have
10
worker
machines
we're
using
kuvit
to
load
balance
across
the
host,
so
that
is
how
goaded
to
be
an
ip
that
I
just
happen
to
know
is
available
in
my
private
vlan.
Oh
yeah,
there's
some
interesting
networking
stuff
going
on
here.
Everyone
wants
to
know
about
that.
Just
ping
me.
I
am
in
the
the
khaki
slack
so
yeah,
let's
generate
our
cluster,
our
template.
There
we
go.
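Generating the template is again the ordinary clusterctl step; a sketch of what that looks like with the counts she mentions (the variable names are illustrative, since each provider defines its own):

```shell
# Generate a cluster manifest from the provider template (variable names illustrative)
export CONTROL_PLANE_MACHINE_COUNT=1
export WORKER_MACHINE_COUNT=10
export CONTROL_PLANE_VIP=192.168.1.25   # an address known to be free on the private VLAN
clusterctl generate cluster mvm-demo > cluster.yaml
kubectl apply -f cluster.yaml
```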
H
That is that. I do need to edit a bit here. So, as Richard said, we are going to have a dynamic scheduler which will go and talk to something like Tinkerbell and say, "hey, what hosts do I have?" We don't have that yet, so I'm just going to go and paste in the IPs that I know I have. There we go. There is one, zero, and... oh, go away. No, I don't! Oh sorry, I hit an extra key and now an app is asking me if I want something. Go away. Thank you.
H
So let's go ahead and grab the secret, decode that, and get the config. We can then watch the nodes come up. As you all know, the control plane always takes a minute to come up, so it's going to hold on watching that for a while, until the control plane IP is ready to go; and then, once that's done, all the workers come up pretty damn fast, right? Things are changing.
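Grabbing the secret and decoding it maps onto the usual CAPI kubeconfig retrieval; a sketch, assuming the cluster is named mvm-demo:

```shell
# Pull the workload cluster's kubeconfig out of the management cluster
kubectl get secret mvm-demo-kubeconfig \
  -o jsonpath='{.data.value}' | base64 -d > workload.kubeconfig

# (clusterctl get kubeconfig mvm-demo does the same thing)

# Watch the nodes register as the control plane and workers come up
kubectl --kubeconfig workload.kubeconfig get nodes -w
```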
H
So,
just
to
caveat
this,
we've
had
some
problems
in
the
last
week
or
so
with
the
scheduling
part
where
sometimes
all
the
worker
knows
come
up
on
one
of
the
hosts,
rather
than
so
balanced
across
all
of
them,
and
I'm
really
hoping
this
doesn't
have
it
happened
today.
I've
been
quite
lucky
so
far.
It
did
work.
It
worked
until
literally
this
week
it
worked
and
it
was
really
good.
So
we
don't
know
what's
happened,
but
there
we
go.
There's
something
to
come
up.
Please
come
up
on
multiple
hosts.
A
All right, I'm not seeing any hands, but yeah, that's great, and I echo what Vince is saying in chat: excellent presentation skills, very fun to watch. And I also applaud you for staying away from nautically themed names for your projects; that's rare to see in Kubernetes. So thank you.
H
Yeah, we're going for a gun theme this time: Firecracker, Flintlock... we've got lots of gun themes going on.
A
I was detecting the gunpowder element kind of running through everything. Okay, so thank you, Richard and Claudia, again. Yaseen, you're up next with port management.
I
Yeah, so it's just kind of a PSA for folks: the PR for handling port management for the API server is out. It's still missing the updates to the book, but it should have the change that is needed. There's probably another PR that is going to follow up to the book, which is going to discuss how we're doing port management and control plane endpoint management in general, when users supply those or want to supply those. So yeah, please take a look; inputs are welcome. Thanks.
J
Yeah, hi. I just wanted to quickly follow up on last week's discussion on upgrading clusters to Kubernetes 1.23, where the CSI migration flag is enabled. I mentioned last week I was having trouble in the upgrade process; thanks to Yaseen, who pointed out a flag that made things work. So I created a document that lists a couple of upgrade scenarios that worked in CAPA; I just wanted to share this with others in case other providers also need it.
A
Awesome
and
I'm
just
noticing
here-
okay,
so
you
are,
you
do
have
the
specification
for
like
the
cloud
provider
flag
and
everything
there
is.
That
is
that
what
this
these
categories
are.
J
Right,
if
you
go
a
little
bit
below,
I
shared
the
qbm
configs
that
worked
for
each
scenario
that
I
tested.
So
since
upgrading
like
moving
to
external
for
both
csi
ccm
is
not
required.
We
can
just
use
external
cc
csi
with
interior
ccm.
J
I
tested
a
couple
of
different
upgrade
scenarios
and
those
could
create
them.
Configs
are
specific
to
each
scenario.
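For readers who haven't seen such configs, the usual way this is expressed in CAPI is through the kubeadm extraArgs; this is a generic sketch of the external-cloud-provider pattern, not the exact configs from the document being discussed:

```yaml
# KubeadmControlPlane fragment switching components to an external cloud provider
kubeadmConfigSpec:
  clusterConfiguration:
    apiServer:
      extraArgs:
        cloud-provider: external
    controllerManager:
      extraArgs:
        cloud-provider: external
  initConfiguration:
    nodeRegistration:
      kubeletExtraArgs:
        cloud-provider: external
```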
A
Cool, that's awesome work, yeah. This is going to become very important in the next few releases, I imagine.
A
Okay, and Stefan, are you ready to go with our last demo here? Yes.
I
Yeah, so one quick thing regarding CCM and CSI migration. First, shout out to Sadaf for the great work; this scenario is definitely not easy. The second thing is that the document is outlining a, how can I say, non-HA migration path for CCM, meaning that at some point the CCM in the control plane is not going to be able to identify nodes joining or leaving the cluster. So this isn't an issue, really, for Cluster API, due to the way we do upgrades.
I
So
if
you
refer
to
the
kubernetes
docs
on
how
to
do
ccm,
migration,
you're
going
to
see
a
different
approach,
but
that
one
is
mainly
is
mainly
targeted
like
it
is
mainly
targeted
for
the
win
for
the
windfall
h,
a
work
where
you
don't
want,
where
you
don't
want,
like
a
any
small
window
between
the
fact
of
you,
know,
new
ccm
being
up
and
the
cluster
being
able
to
to
join
clusters.
So
that's
it
like
that's
a
different
approach.
I
That's
like
the,
in
my
opinion,
like
the
the
most
pragmatic
approach,
this
one
is
more
targeted
for,
for
you
know,
for
scenarios
where
you
actually
don't
need
any
downtime
in
the
fact
or
in
the
ability
to
join
or
and
join
nodes
from
the
tests
that
I've
done
and
said.
I've
done,
like
the
timing,
is
pretty
short
and
implementing
this
approach
would
require
like
a
very
deep
rework
in
terms
of
cluster
api.
A
Okay, thanks, Yaseen, that is great information. I know this is a complicated topic, so that's a good follow-up there. Thank you.
F
Just to get on the same page again: yeah, so I just wanted to do a short demo of a tool we recently wrote and merged. Essentially, we spent some time over the last few weeks to improve logging.
F
There will be something upstream soon; currently we're more figuring out how to do it, but the first step that we already did is that it's now possible to enable JSON logging, in core CAPI at least. We also have something in the migration document, in case some providers want to follow. So now I'm building on top of the capability to enable JSON logging.
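Enabling JSON output on a controller is typically just a manager flag; a sketch of the pattern (the flag follows the Kubernetes structured-logging convention, so check it against the CAPI release in use):

```yaml
# Manager container args fragment enabling structured JSON logs
spec:
  containers:
    - name: manager
      args:
        - --logging-format=json   # structured logging instead of plain text
        - --v=4
```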
F
You can do this with either some local logs, if you execute your e2e tests locally, or with the logs from a ProwJob. And yeah, so essentially, what we're trying to do is improve the feedback loop, so that when we actually debug end-to-end tests we use our own logs to figure out what the issue is, and we can then incrementally improve our logs so that they're actually useful for debugging things.
F
I
mean
we
already
have
logs,
but
I
think
there's
a
room
for
improvement
there
yeah
and
what
that
tool
essentially
does.
Is
it
downloads
the
locks
from
somewhere
and
it
pushed
them
into
loki?
We
have
some
documentation
for
it.
So
if
someone
must
take
a
look
here
under
testing,
troubleshooting
and
event
tests
and
that's
essentially
a
tool
under
hike
tools,
you
can
pass
in
a
log
path.
The
log
path
can
be
either
just
directly
your
approach
url.
F
So
just
that
I'll
show
you
so
just
that
url
here
you
don't
have
to
click
anywhere
or
if
you
want
some
gcs
path.
So
that's
the
underlying
path
of
that
essentially
of
that
directory
here.
So
just
in
case
you
want
to
use
gcs
directly
or
you
can
use.
The
local
folder
mostly
makes
sense
if
you
run
your
entrance
locally
or
if
you
have
some
other
log
sources,
so
it
should
also
work
with
not
necessarily
only
with
cluster
api
locks.
Let's
say
it
like
that.
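An invocation might look roughly like this; the tool path and flag name here are illustrative (the real ones live under hack/tools in the repository):

```shell
# Push e2e logs from a ProwJob into a local Loki instance (names illustrative)
go run ./hack/tools/log-push \
  --log-path "https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/<job>/<build>"

# --log-path can equally be a GCS path or a local artifacts folder
```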
F
So
you
run
the
two
and
the
tool
downloads
from
here
and
pushes
to
locally
there's
some
default
url
on
with
a
local
default
port,
but
you
can
also
specify
the
port
of
your
local
instance.
If
you
want
to,
if
you
are
running
the
tilt
environment,
you
have
to,
you
have
loki
already
deployed
as
soon
as
you
use,
deploy
observability
with
grafana
and
loki.
So
then
you
get
graffana,
you
get
loki
and
you
get
on
port
forward
and
that
tool
will
automatically
push
to
that
port.
That
is
already
exposed.
F
After
that
you
can
access
your
locks
via
grafana
or
loki
also
has
some
kind
of
cli,
where
you
can
just
query
unlocks,
essentially
additionally
to
just
providing
that
command.
We
also
added
a
button
to
tilt.
So
if
you
have
tilt
here
and
if
you're
in
the
loki
tab,
you
have
a
button
here-
and
you
can
just
add
your
path
here,
so
I
already
did
it
before.
But
essentially
you
add
your
approach
up
url
here.
You
click
on
import,
logs
and
yeah.
F
Let's
just
do
it
again,
they're
already
imported,
but
I
just
repeat
it
so
then
you
see
here
that
it's
running
the
tool.
It
is
downloading
the
logs
and
it
is
uploading
them
again
and
the
result
is
that
when
you
look
at
loki,
you
can
analyze
your
logs.
From
your
end
to
end
test,
you
see
that
the
files
are
somewhere
from
gcs
and
you
can
just
do
the
usual
stuff
filter
on
the
controller
filter
on
a
machine,
cluster,
etc.
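That kind of filtering corresponds to ordinary LogQL queries, whether in Grafana or via Loki's logcli; a sketch (the stream label names are illustrative, since they depend on how the tool labels what it pushes):

```shell
# Query the imported logs with Loki's CLI (label names illustrative)
logcli query --addr=http://localhost:3100 \
  '{app="capi-controller-manager"} |= "Machine"'
```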
F
Yeah,
that's
it
you
don't
actually
have
to
use
tilt.
The
only
thing
that
it
depends
on
is
slow
key,
so
you
so
the
minimal
version.
Is
you
download
it
from
somewhere?
You
upload
it
in
loki,
and
then
you
can
already
query
it
with
flux
c
line.
If
you
want
to
actually
yeah
look
at
it
via
grafana,
of
course,
you
also
need
grafana.
So
just
a
very
simple
version
is
just
use
our
tilt
environment,
but
of
course
you
can
just
run
your
logins
in
somewhere
yeah.
That's
it
essentially.
C
No,
I
I
I
think
this.
This
is
great
and
we
are
just
trying
to
make,
let
me
say
log
as
a
first
class
tool
in
our
developer,
workflow,
and
so
as
soon
as
we
get
these,
they
will
be
improved
as
business
as
usual.
So,
thank
you
stefan
for
implementing
this.
Maybe
you
want
to
show
how
you
can
access
the
lock,
the
lock
ui
from
tilt.
F
Oh
yeah
sure,
by
the
way
we
have
some
more
tilts-
let's
say
a
third
walkthrough
available
somewhere
in
our
let's
chat
about
discussion
just
in
case
someone
wants
to
follow
up
on
that.
So
we
also
talked
about
a
little
bit
here,
but
I
did
a
very,
very
short
version
of
it.
But
here
are
two
youtube
videos,
depending
on
what
region
you
want.
F
You
can
look
that
up
in
more
detail
if
you
want
but
yeah.
If
you
have
look
here,
that's
just
the
resource
view
where
you
see
what
is
deployed,
but
you
can
also
click
that
view
and
here
under
observability
you
have
low
key
and
you
can
just
click
on
that
button
and
that's
essentially
just
support
forward
from
the
so
sorry
that
button.
That's
just
support
forward
from
grafana
to
localhost.
Of
course
you
can
access
loki,
that's
the
part
we're
using
to
push
locks,
but
I
mean,
of
course
you
want
to
use
the
ui.
A
Yeah, very nice. This seems like it's going to be really convenient for doing e2e testing and debugging those tests. Does anybody have any questions or comments for Stefan?
A
Okay, very cool. So we have reached the end of the agenda, and no one has added more topics. I'll give a minute here in case anyone has questions or wants to raise their hand; if not, then we'll give back, you know, 20 or so minutes of our day here. Paul, go ahead. Oh.
K
Yeah, just to kind of let everybody know that at Oracle Cloud Infrastructure we've released our provider. It's on our own GitHub org at the moment, and I would like to take a couple of minutes next week to just sort of introduce it to everybody, if that's possible.
K
Yeah, we've got some work to do before we can get it migrated into the SIG, mainly internal pain, but that would be the idea; it'd be good to get that done as soon as possible.
A
Awesome, and yeah, if you'd like to demo or do a little presentation on it next week, definitely just put your name on the agenda, and yeah, I think... yeah, I think you'll see.
A
It's
not
a
competition.
We
we
just
love
watching.
We
just
love
watching
good
entertainment
here,
excellent,
all
right,
thanks
again
paul
anyone
else.
A
Okay,
well,
I
guess,
then
we'll
call
this
meeting
to
a
conclusion.
Thanks
everyone-
and
hopefully
we'll
see
you
next
week.