Description
Welcome to the live stream of the Kubernetes & Cloud Native Berlin Meetup - June 2023. Doors open for the in-person meetup at 5 pm. The talks will begin at 6 pm, so stay tuned.
Find more information here: https://www.meetup.com/berlin-kubernetes-meetup/events/293992360/
About this meetup: We are a group for people interested in discussions around working with, running and developing Kubernetes and other cloud native technologies. We're excited about container infrastructure, distributed systems, and learning more about managing and extending them.
A
I think this should be fine. We shouldn't have... yes, I fixed it. Good.
Okay, welcome to the Kubernetes and Cloud Native summer special meetup. I'm so happy so many of you are here, and I'm really hoping some more trickle in; we've seen that that's usually the case. We'll have pizza, networking and break time after the first talk. We have two talks this evening; that's our usual program and we're sticking to it.
A
He was the founder and CEO, and we're now part of Microsoft. But we try to keep the community vibe going with all of these meetups, despite our connection, and also to foster our connection, because we are doing good work even within Microsoft. I do also want to thank Microsoft for the fact that we still have this office space, which used to be the previous office space, and we're quite delighted with the few meetups that we did put on here after relaunching and rebranding it post-pandemic.
A
Since last year, we've had four meetups in this office, and this is the fifth one. This is a call for anyone interested in organizing, co-hosting or co-organizing some of the Kubernetes and Cloud Native Berlin meetups going forward. If you'd like to do this in your office space, if you'd like to get us in touch with anyone who would like to do it in their office, or if you'd like to host or co-host, please have a chat with me.
A
I'm around, so just grab me for a few minutes and we can have a good chat about it, and we can have a more elaborate discussion later as well. Yeah, let's just get started with today's program, because I feel like we are starting with a little bit of a delay. But it's nice, because it's summer, right? I know it's been a little warm in the last few days, but we're very happy to have planned this for today.
A
We have two talks today, one by Hannis, the other by Rokas, and both of these talks are 30-minute sessions. We'll have a 10-minute Q&A right after. I've noticed something a lot in our informal meetup settings, which we also enjoy and cherish a lot.
A
Something that comes up often is that when a talk is interesting and everyone's really geared up and engaged, it can go on for a little longer. I just have to say that I'll have to be a little bit of a referee there and stop the talk once it crosses the 10- or 15-minute Q&A round. You can continue to have discussions with the speakers even later, feel free. This is your community meetup.
A
There are also a lot of giveaways from Cast AI and Microsoft. We have some community meetups and initiatives that we've organized; there are t-shirts and swag stickers from those, so feel free to grab them during your break. There are beverages in the fridge, and pizzas will be here in some time, but please don't be distracted by them; they will be there for you and for your enjoyment. There's one more thing I wanted to touch upon, and I just remembered what that is: it's the code of conduct.
A
We value diversity, inclusion, representation and all of that. Everyone's opinion counts, everyone's opinion has value. Please be respectful. If you find that something's not really going in a specific order or is not respectful, please feel free to come and talk to me. We can have a dialogue about it and we'll try and fix the situation. The idea is to work together as a community, get through this and enjoy ourselves. This is for you.
B
Thank you. All right, yeah. My name is Alice Probst, I'm from Kubermatic. If you don't know the company yet: the biggest product of the company is called KKP.
B
In our case we're talking about networks, so you want to isolate your network, physically disconnecting it; I mean, that's traditionally the case, but now more and more we're referring to the case where you bring in a firewall, for example. So it's not really an air gap, but in a sense you try to get as close to an air gap as possible.
B
That being said, there are different flavors of it, let's say, depending on how strictly you want to run it. You could argue that maybe you do the air gap later on: during provisioning time of your nodes you allow everything, or limited access, but to the outside there is access, and then you shut it down.
B
So you make sure that during runtime you're shutting it down, or you are gapping your environment. I think the most sensible approach everyone could do is to have an allow list and say: okay, I have a list of trusted sources, and you can connect to all of these freely. And then the stricter your environment is, or the more regulations you have, then we're actually talking about the blocked...
B
If you shut it down, no one connects to it; the problem is, of course, you can't access anything. And you want to have downloads only from trusted sources, so you can't just allow any download.
B
Good
reasons
are
also
to
improve
stability
and
resilience.
Github
is
not
so
reliable
anymore,
sadly,
I
mean
they
can
go
down.
Ubuntu
mirrors
are
down
sometimes
and
also
maintainers.
There
are
some
packages
suddenly
gone.
If
someone
is
an
node.js
developer,
who
remembers
that
small
packages
can,
if
they
go
lost
that
have
a
big
impact?
B
You
you're
gonna
wonder
why
I'm
getting
into
all
the
dependency
stuff
with
top
talks
to
airgap,
but
we
will
get
to
this
performance-
is
also
a
thing
you're,
depending
on
outside
sources,
so
you're,
depending
on
their
performance,
and
we
remember
when
GitHub
and
not
Docker
included
their
rate
limiting.
B
You need to have an audit log of what is actually accessed. You need to have a software bill of materials, so you can't just easily run any Kubernetes, and this is actually the reason why companies are coming to us and saying: we are interested in Kubernetes, but we can't run this on our premises, it's completely air gapped.
B
How does it look? You start, as always, just with a simple Kubernetes installation. You're really happy that it runs and it scales and that everything is fine. Then you realize that with Kubernetes not everything is available out of the box, and you're adding more and more dependencies on other repositories, most notably, of course, container image repositories. Docker Hub is the biggest or most prominent one, but you soon end up adding to that, and the thing is, in that situation you realize: okay, I need to be aware of these sources. I can't just download anything from anywhere.
B
So
how
do
we
do
this?
How
do
we
achieve
an
IR,
gapped,
kubernetes,
environment,
installation
I,
would
say
there
are
three
things
to
it
or
actually
two
and
one
caveat:
do
you
want
to
control
tightly
control
your
sources,
so
you
need
to
have
some
system
in
place
where
you
can
make
sure
that
what
you
download
is
limited
and
and
screened
and
checked.
B
I
would
argue,
and
happily
to
discuss
this
later,
that
you
don't
want
to
waste
time
on
and
money
on,
pre-baking
images.
There
might
be
good
reasons
too,
let's
get
to
it
later,
but
I'd
say
yeah
anyway
and
then
leverage
instead
custom
provisioning
logic.
B
So
how
do
we
control
our
sources?
That's
that's
the
thing
I
mentioned
earlier.
Most
likely
you
already
have
something
in
your
Enterprise
architecture,
depending
on
how
big
your
company
is,
but
maybe
you
already
have
a
jfrog
or
an
axis
in
place,
use
that
as
a
central
store
for
your
software
and
then
use
that
as
an
allow
list.
So
in
this,
in
this
case,
you
immediately
have
limit
limited
the
access
and
everything
just
goes
through
that
store
at
the
end.
You
also,
then,
can
use
all
of
the
features.
B
These
massive
tools
offer
vulnerability
scanning,
for
example.
Also,
if
sbomb
is
interesting,
for
you
can
get
get
this
out
of
jfrog
and
access
Harbor
is
getting,
of
course,
more
and
more
interested
interesting
for
I.
Think
our
community,
but
it's
not
there
yet
and
they're
focusing
on
oci.
So
if
you
need
to
provision
machines
and
install,
for
example,
Ubuntu
or
any
other
stuff
binaries,
you
need
to
look
elsewhere.
B
Yeah
now
I'm
making
the
case
against
pre-baking
images.
I
know
in
the
when
you're
talking
about
air,
getting
kubernetes
people
jump
to
the
conclusion
to
pre-baking
everything
on
there
you
can
do
this
depends
on
your
case.
I
would
argue
that
you
still
need
some
logic
to
provision
your
machines
right,
because
you
still
have
some
code,
which
then
goes
render
some
cloud
in
it
and
Provisions
your
machine.
So
you
you
already
have
that
so
leverage
this
put
everything
in
there
and
don't
waste
your
time
with
with
pre-baking
too
much.
B
You
end
up,
making
things
too
fragile,
depending
on
your
environment
again
and
and
your
setup.
Maybe
you
are
in
a
bigger
company
and
you
have
different
verticals
different
access
to
different
other
parts
of
your
infrastructure,
different
access
credentials.
You
need
to
save
these
images
all
over
the
place,
so
it
adds
up
some
storage
and
traffic
things.
You
need
to
think
about
so
sure
pre-packing,
some
part
of
it
makes
sense.
B
I
would
say
mostly
off-the-shelf.
Image
is
good
enough,
but
yeah
if
you're
pre-backing
really
think
about
a
good,
solid,
bare
image
and
that's
and
then
the
rest,
all
the
conditional
logic
you
might
have
put
that
in
your
provisioning
logic,
and
and
now
we
are
there
where
it's
getting
interesting.
B
If
you,
if
you
have
let's,
say
a
static
cluster
and
you're
bootstrapping,
you
know
it's
a
terraform.
You
have
some
remote
exit
code,
some
clue
code.
You
do
yourself,
it's
still
good
enough
right,
like
you,
can
then
take
this
and
overwrite
every
download
URL
pointing
this
to
your
jfrog
or
Nexus
or
Harbor.
If
you
only
am
using
oci
with
cops,
you
can
actually
override
all
the
URLs.
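As a rough illustration of that URL-overriding idea (not from the talk's slides; the artifact-store hostname and version below are made-up placeholders), a homegrown bootstrap fragment might look like this:

```shell
#!/bin/sh
# Hypothetical bootstrap fragment: every download that would normally go
# to the public internet is pointed at the internal artifact store.
ARTIFACT_STORE="https://artifacts.example.internal"   # placeholder host
K8S_VERSION="v1.27.3"                                 # example version

# Instead of fetching from https://dl.k8s.io/..., fetch from the mirror:
curl -fsSL -o /usr/local/bin/kubelet \
  "${ARTIFACT_STORE}/kubernetes/${K8S_VERSION}/bin/linux/amd64/kubelet"
chmod +x /usr/local/bin/kubelet
```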
B
So
in
that
case,
you
can
easily
just
say
all
the
downloads
are
going
to
or
downloaded
now
from
the
central
store
and
with
cluster
API
and
also
kkp,
which
implements
part
of
the
cluster
API.
This
is
actually
where
you
have
the
most
flexibility,
because
there
you
have
a
templating
and
a
provisioner
which
then
takes
the
takes
the
templating
and
renders
cloud
in
it
for
you
there
you
can
basically
say
Okay
I
want
to
customize.
B
So
this
is
a
bit
how
it
looks
like
would
be
looking
like,
so
you
have
some
template
in
your
cluster
and
you
have
a
provisioner.
So
the
template
is
where
you
have
your
custom
bootstrap
link
logic.
You
can
have
a
template
per
operating
system,
so
we
have
customers
who
want
to
put
a
cluster
based
on
Ubuntu
or
Rel,
or
any
of
these,
so
imagine
how
this
would
scale
in
terms
of
pre-baking.
Instead
of
that,
we
would
have
a
template
for
each
of
these
cases.
B
In this case you have a conditional in your bootstrapping logic. So there's a template somewhere in your cluster, and then you have the provisioner, which actually knows how to talk to the IaaS, talks to the template, renders that to cloud-init, or also Ignition if you need that, and then bootstraps the node from it. This, I would say, is the ideal case, and this is roughly how it looks for Cluster API and KKP. I don't know if you can actually see this.
B
A Docker repo, and yeah, some other stuff, and that then allows me to say in my bootstrap script to just disable the default repo and then continue installing everything; instead of going to the internet, it goes to my central store. Before that, obviously, I set up my central store, my Nexus or JFrog, and added repositories there, which is a bit tricky. You need some expertise in handling these beasts, but then you're most likely backed by support from these companies. I'm not endorsing them, by the way, but yeah.
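Disabling the default repo and pointing installs at the central store could look roughly like this on an Ubuntu node (a sketch; the mirror hostname and repository paths are placeholders, not anything shown in the talk):

```shell
# Hypothetical sketch: swap Ubuntu's default apt sources for an internal
# mirror hosted on the artifact store, then install as usual.
cat > /etc/apt/sources.list <<'EOF'
deb https://artifacts.example.internal/repository/ubuntu jammy main universe
deb https://artifacts.example.internal/repository/ubuntu jammy-updates main universe
EOF
apt-get update && apt-get install -y containerd
```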
B
Here's one of the first pro tips; I think it's quite nice, I actually figured it out a bit too late. With containerd you can use their mirror feature, so you can actually say on your node: here's a mirror for all the image pulls. Instead of going to Docker Hub, to /_/redis, go to my central store.
B
This
is
how
it
looks
like-
and
this
is
this
is
the
only
configuration
you
need
on
your
node-
to
limit
access
to
the
internet
and
instead
route,
everything
to
your
central
store
and
the
cool
thing
about
this
and
that's
supported
by
Nexus
and
j4.
No
sorry
yeah
I'm
getting
to
the
grouping,
but
this
is
actually.
B
The
good
thing
is
that
the
path
structure
of
container
Registries
are
homogeneous,
so
everything
but
in
front
can
be
changed
and
then
the
the
registry
resolves
everything
for
you.
So
no
matter
what
URL
is
used
in
your
pod
definitions
or
in
your
hum
charts
that
doesn't
need
to
be
changed.
So
you
you,
don't
even
need
to
mention
this
to
you
to
your
teams.
B
That's not something I saw for Harbor; I'm not sure if they implemented it. It would be nice, because then you can basically say: I have all of these repositories in use by my teams, most notably registry-1.docker.io, quay.io, of course, gcr.io, and instead of having a repository URL handed out or configured in this mirror configuration, I can configure a group repository, ending up with one URL pointing to that group repository, and that's the URL I can use internally.
B
Then you need to manage the store, and again, this can come with a cost, of course, but potentially your company already has that, so you can leverage it. And then you need to think about your provisioning logic.
B
So
if
you,
if
you
end
up
having
a
lot
of
conditionals
in
your
scripts
and
and
become
like
in
your
homegrown
scripts
and
gets
bit
out
of
hand,
think
about
Solutions
like
cluster
API
or
or
kkp,
but
the
thing
is
you
need
to
understand
that
here
You
could
argue
how
how
our
air
gap
this
is
actually
like.
The
artifact
store
has
access
to
the
internet
and
down
here,
I
kind
of
you
know:
I
can
access
a
system
or
yeah
in
an
environment
which
then
has
access
to
the
internet.
B
In the fully air-gapped case, what you do then, or what I would suggest you could look into, is replication. So the store up here, which could be your company-wide artifact store, has access to the internet. You can use that: you configure all the repositories you need, and then you use replication and replicate into your own artifact store down here. So there's a complete gap here, and your artifact store here is basically fed by a snapshot. The way you transport the snapshot is up to you and your security constraints; this could be a USB drive. So you're taking a snapshot from up here. Obviously that means you need cache warming, right? You don't have the pull-through cache benefit, where you don't need to care what's needed when, because it just pulls once and caches. You need to know what's needed, cache-warm everything, and then you take the snapshot, save it somewhere, move it down there and replicate your environment. And then the rest is the same.
B
So
you
you
have
your
provisioning
logic,
which
knows
how
to
point
everything
to
your
store
and
everything
should
be
fine,
yeah,
that's
more
or
less
it
I
hope.
You
remember
that
airgift
has
many
benefits.
Even
if
you
don't
have
those
requirements,
I
would
say
that
after
you
grow
in
size
and
your
installation
becomes
fairly
big,
it
has
so
many
benefits
that
I
would
say
it
becomes
a
must
or
certain
aspects
of
it.
B
If
you
yeah
run
into
these
cases
where
provisioning
becomes
out
of
hand
and
and
or
you
you
do,
pre-backing
images
and
you're
not
so
happy
think
about
yeah,
focusing
some
time
on
that
and
yeah
get
the
most
out
of
your
store.
So
once
you
have
that
use
it
for
vulnerability
scanning
and
so
forth,
and
that's
it.
Thank
you.
A
Okay, sure, we have 10 minutes for questions now, and we don't really have a system where we pass a mic around. As you can imagine, we're working quite informally here, but as soon as you have a question, maybe you can be loud, because this is also being streamed for a live audience. Maybe you can repeat the question on the mic, so everyone has the benefit of it.
B
Okay, so the question is how the user, or the applications running on the cluster, exchange data when everything is air-gapped. Well, that's actually a problem for our customers, but usually they run software there which is not necessarily going to the outside. So we're talking about software which handles medical data, or really...
B
Well,
the
data
is
yeah,
obviously
not
transported
through
the
internet,
that's
true,
but
that
might
be
an
environment
where
all
the
systems
involved
also
in
critical
infrastructure.
For
example,
all
the
systems
involved
are
behind
the
same
air
gap.
B
Right, yeah, so the question was about edge nodes that lose connections. I would say that this is a completely different case, right? I mean, suddenly there is an air gap. For that there are different tools; I think KubeEdge is a bigger project. I'm not super familiar with it, but we also have customers who looked into this.
B
The
good
thing
is
because
of
kubernetes
the
the
software
continues
to
run
right.
In
those
cases
you
wouldn't
have
something
like
a
cache
on
on
the
nodes
which
saves,
saves
the
API
calls
and
then
replace
them
can't
remember
exactly
what
tool
that
was,
but
yeah.
B
True
I
mean
this
is
where
tagging
is
helpful,
at
least
for
also
eye
images,
and
then
I
would
argue
that
as
long
as
you
pull
exactly
that
Shar
bit,
then
you
should
be
sure
that
the
container
is
what
you
want.
B
I
think
that
yeah
yeah,
so
exactly
that
I
would
highly
advise
to
to
always
use
the
digest
at
the
end
yeah
yeah.
If
you
do
use
latest
yeah,
it
can
I
I.
Think
in
that.
In
that
case
it's
not
different
than
when
you
use
latest
and
be
pulling
from
Docker
directly.
For
example,.
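Pinning by digest looks like this in a pod spec (a sketch; the digest shown is a truncated placeholder, not a real image digest):

```yaml
# Hypothetical pod spec: the image is pinned by digest, so the exact
# bytes are pulled no matter where the "latest" tag currently points.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-example
spec:
  containers:
    - name: cache
      image: redis@sha256:9f3c...   # placeholder digest; use the real one
```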
B
Well, yes and no; that's a configuration thing you can then do on the registry, on JFrog, for example, or Nexus. You can say how often it should invalidate the cache and how you handle these kinds of cases. So you can really tweak it depending on the case, but to be sure, I would always advise to add the digest.
B
I'm not sure if I understand the question correctly, because, of course, if you want to get to the logs or anything in your air-gapped environment, you need to somehow have developer access there, so there must be some VPN connection going on. And if you're talking about the access logs of what's getting pulled and what's getting accessed, that's something where this artifact store helps, especially for audit logs; you can get all of this out of your Nexus or your JFrog or your Harbor.
B
There must be some cases for that, but I'm not sure if they allow you to fiddle with those. I mean, I just learned that you can somehow turn off their autoscaler and do it yourself, but I'm not sure how you do it for the control plane, for example.
B
When you're required to do this, that's covered somehow.
B
So basically, on the node level, you make sure that containers are always pulled from your central store. So it doesn't matter where and how and what actually runs or starts pods on your cluster, and what kind of URLs they put in: it will always be pulled from your central store.
B
It's
it's
both
it's
both,
so
you
want
to
download
the
cubelet
and
you
want
to
download,
coordinates
and
your
application
containers,
so
it
it's.
It's
the
whole
the
whole
catalog
of
of
artifacts
we're
talking
about
so
at
bootstrap
time
and
during
runtime.
A
We have refreshments laid out there, but my suggestion would be: we have a slight delay with the program, so I think we can even, let me check, we can even come back at 7:45. That should be okay, yeah. All right then.
A
I'd like to give a special word of thanks to Cast AI. I don't see Chris around, but he's the one who also brought some of the swag and the t-shirts back there, and today's meetup's refreshments are thanks to them. So a big thank you to them would also be nice.
A
Feel free to chat with Rokas or Chris later about the stuff they do and what they build. That's pretty much it; there are some more members from the Cast AI team, but feel free to meet them in your own time and see how you want to go from there. One thing about all the refreshments, though: we were just joking about this, there's a lot, as you can see. We clearly predicted a lot more in terms of head count, but it happens.
A
Sometimes there's no perfect science to how many people will actually turn up, but please feel free to grab a box or two if you want and take it home. Absolutely, don't even hesitate, you don't even have to ask; just feel free to grab as many as you want and go give it to the community, give it to your neighbors.
A
We are very happy to share, we're very happy to care, so just go ahead with that. All right, without further ado: we have Rokas here, who is going to talk to us about cost optimizations while working on the cloud, and a lot of Kubernetes-related topics as well. I'm also quite interested in understanding this, because it's a pet topic that I think keeps coming up in lots of teams: how do you save costs, and why do cloud costs keep mounting? We'll hear more about that from Rokas. Take it away.
C
So, guys, having fun? Yep. So I'm Rokas Bilavichus, as I was introduced, from Cast AI. I've been with the company from its birth, I was the second developer there, so it's been quite an exciting journey. Throughout my career I have seen many different roles, from sysops, devops, monitoring, development and architecture, and now I'm an engineering manager, so I'm quite interested to see all the developments, from hardware to platforms to actually writing software. Today I'll talk about optimizing Kubernetes costs and performance. My first interaction with Kubernetes was in 2017.
C
It
looks
like
yesterday,
but
it's
six
years
ago
right.
So
hopefully
it
will
be
relevant
to
you
and
you
may
take
something
out
of
this.
C
So
yeah,
let's,
let's,
let's
start
so
what
is
cost
and
usually
how
do
you
get
cost
like
you
get
a
build
right
and
then
you
need
to
understand
like
what
what
this
bottom
line
is
from
from
what
does
it?
C
What
does
it
contain
like
what
for
what
do
I
have
to
pay,
and
maybe
it's
easier
in
a
grocery
grocery
store,
but
if
you're
on
a
cloud-
and
this
is
you
know,
Cloud
native
Meetup-
so
we'll
be
focusing
mostly
on
cloud
Technologies,
not
on-prem-
or
things
like
that,
so
it
becomes
a
bit
more
difficult
for
some
folks.
C
So
how
many
of
you
have
seen
a
cloud
bill?
Could
you
raise
your
your
hand
like,
and
did
you
like,
go
deeply
into
like
what
how
much
your
service
costs
inside
all
those
and
like
for
kubernetes?
C
You kind of don't get a line there in the bill that says "your Kubernetes costs: 2,000", right? It's a bit different, as a cloud bill is built from all the resources, and then you get a list of different kinds of those, and many more lines and more lines and more lines. I'm not surprised that this guy looks like this.
C
So what resources does Kubernetes use? It's compute, hardware; pretty straightforward: nodes, right. And in the cloud it's network as well, not obvious to everyone, but sometimes the bill explains to you that there's a network there and you have to pay thousands for egress and things like that, and of course there's storage. Today I will focus mostly on compute and network, and, as I said, I'm being exposed to this in my current role at Cast AI, where we are helping companies to optimize Kubernetes and their infrastructure.
C
So all the strategies I'll be talking about in this presentation are things that work in real life and that we are applying. I won't be doing a sales pitch or anything; you can take these strategies and do it yourself. There are many tools, open source tools as well, which help with different parts of this, and we can chat about specific tools later on, maybe during Q&A or networking time.
C
So let's maybe start with the simplest one: node utilization. I'll give a pretty simple example: a streamlined cluster, three nodes, each of those nodes has a kubelet running, which has an overhead; that's the price you have to pay for having Kubernetes. And then you have pods inside those nodes, and by default the Kubernetes scheduler works against cost optimization: if you roll out a new deployment, create a new pod, the scheduler will evenly distribute the pods across the nodes.
C
What you can do is compact the nodes, manually or with some tools, and just move a few pods from one node to another. You can do it by cordoning a node or draining a node, for example. That would be, I guess, the simplest example: you see that there is unused capacity.
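The manual compaction he describes maps onto the standard cordon/drain workflow; a sketch, with `node-3` as a made-up node name:

```shell
# Stop scheduling new pods onto an underutilized node, then evict its
# pods so the scheduler packs them onto the remaining nodes.
kubectl cordon node-3
kubectl drain node-3 --ignore-daemonsets --delete-emptydir-data
# Once empty, remove the node through your cloud or autoscaler tooling.
```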
C
As
you
know,
we
are
setting
up
Auto
scalers,
like
that
new
nodes
would
be
created
when
your
application
scales
up
scales
down,
but
the
scheduler
works
against
that,
as
it
never
makes
those
nodes
empty
and,
as
you
can
see
in
this
graph,
like
there's,
no
compaction
active
compaction,
enabled.
C
But if we applied the strategy which I showed previously, it's very easy to get to this state. This example even has a headroom of maybe 10 percent, so that you always have additional capacity available: if you create new pods, you don't need to wait for autoscaling to kick in and add a node, which is a few minutes' delay. We're building systems not just to save money, but so that they work the way we want without that delay. But you get the drill, right? So I would say this can easily give you like 30 percent, in your daily routines, if you start looking into what's happening here; that's easily achievable.
C
This is pretty close, and maybe you were asking about instance families. So yeah, there's like a gap of five to seven hundred, and different clouds have different counts, and selecting the exact ones that make sense for your application is really hard, but it's also impactful. Here we have different instance families, and, as you see them listed, I won't repeat myself, you can read it on the slide, these are different instance families, and different applications utilize those families differently. If you have a CPU-intensive workload or a memory-intensive workload, it's beneficial to select the instance that fits your application, even if the instance is more expensive, because your application will work faster and use fewer resources compared to different instance types, so the delta will be greater and the system will run smoother.
C
I even remember one use case in our system: you can have a microservice which is doing several things, and you can identify that one part of the microservice is memory-intensive and another part is CPU-intensive, and even change your architecture by observing how your application is working, maybe split it into separate microservices; then you can scale them more efficiently and select the specific infrastructure that they would utilize better. And when speaking about this, sometimes there is a gap between developers and infrastructure people. I wouldn't say friction, but there are two parties, and you need to spend energy to reach certain goals. So I'm always advocating that developers should at least try to understand more about how infrastructure works, so that the end result, the whole system, would be done more, how to say, advanced, or sophisticated, that's the word. So, going further, this is an example.
C
How different can CPU utilization become just by moving from general purpose to compute-optimized? I'm always focusing more on CPU than on memory when talking about cost, because CPU is much more expensive in the cloud than memory; it's like an 80/20 or 85/15 ratio in what you pay at the bottom line between these resources. So always prefer optimizing CPU first; it will always yield better results.
C
I selected this image at random, actually; I just searched for something that represents what I saw in reality, how we optimized my team's service, and the results were quite dramatic in a way. In Grafana, for example, you would see like 10 replicas running on different nodes, and CPU utilization would be all over the place: one replica has much higher utilization, another lower, and so on. But then we looked into that and selected the instance types that fit the use case.
C
Then it was a flat line. It was much easier to tune the requests in Kubernetes, for example, how much they need, because you know the performance of the CPUs they're running on. Okay, so we talked about instance families; let's talk a bit more about CPU architecture. ARM is on a roll, and in recent years certain cloud providers have been pushing this like a train for people to use it.
C
So it's quite easy: you can leverage that by building multi-architecture images. For example, in Go you can easily generate artifacts either for one architecture or for all of them, and it's similar in other languages, and if you instruct Docker to also build a multi-architecture image from those artifacts which support both, you can be more flexible in your Kubernetes environment.
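A minimal sketch of that flow, assuming Go and Docker Buildx (the image name is a placeholder):

```shell
# Cross-compile the Go binary for each architecture...
GOOS=linux GOARCH=amd64 go build -o bin/app-amd64 .
GOOS=linux GOARCH=arm64 go build -o bin/app-arm64 .

# ...then publish a single multi-architecture image.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.internal/team/app:1.0 \
  --push .
```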
C
By
saying
you
know,
you
put
a
label
on
application
that
it
supports
both
architectures
and
then
the
infrastructure
team
can,
you
know,
use
either
arm
either
md64
architecture
and
further
on
optimize.
That
then
I
last
checked
the
prices
of
the
different
instance.
Types
arm
is
on
a
similar
price
level
as
AMD,
but
actually
it
it
does
more
work
with
the
same
Cycles,
so
your
performance
will
be
better
you'll
be
paying
around.
Similarly,
why
I'm,
also
like
arm
like,
can
I
change
the
laptop
from
Intel
to
arm
it's
it?
C
It's
not
like,
like
a
portable
heater
anymore
and
I,
really
like
our
planet,
so
I
would,
from
that
standpoint
it's
you
know
even
not
looking
like
as
an
engineer,
but
as
a
being
more
efficient
than
spending
less
energy
on
running
the
data
centers.
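For workloads that only support one architecture, Kubernetes already labels every node with its CPU architecture, so pinning is a one-line selector (the fragment below is a sketch):

```yaml
# Hypothetical deployment fragment: schedule only onto ARM nodes.
spec:
  template:
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64   # standard well-known node label
```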
C
If that hits home, that's another reason to move to ARM. Okay, let's talk about the next topic; I would call it the elephant in the room. I was talking about nodes, instance types, how to select the right one, architecture and so on, but usually we just put requests on the application, right? And how do you set the requests on yours? I don't know how many of you are doing that, but for the ones who are...
C
C
So if you put, say, six CPUs and 10 gigs of RAM and then check what the actual usage is, you may find you're wasting money, and then it almost doesn't matter that you selected a compact instance type. Well, it matters, but you're leaving a lot on the table if you're not looking into this as well. And this might just be the usage at one point in time; usually the recommendation is to set requests a bit higher than observed usage if you want the best performance, because sometimes it spikes, especially the memory.
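A minimal sketch of sizing requests against observed usage (the container name, image, and numbers are made up for illustration):

```yaml
# Fragment of a pod spec inside a Deployment.
containers:
  - name: api                        # hypothetical container
    image: registry.example.com/api:1.0
    resources:
      requests:
        cpu: "2"                     # slightly above observed steady-state CPU
        memory: "4Gi"                # pad memory more generously: spikes here mean OOM kills
      limits:
        memory: "6Gi"
```

The point is to revisit these values against real usage data rather than setting them once and forgetting them.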
C
Okay, so now we went through optimizing our nodes and workloads, but that's not the end of optimization in the cloud. There's a market, there are dynamics to the data centers; it's not like on-prem, where I bought 20 servers and that's the capacity.
C
The list price of on-demand instances is clear: it's a list price, you can check it. Most companies go: okay, we use a lot of them, let's make a better deal, we're a big user of the cloud, let's buy reserved instances. One caveat with that: usually we don't even buy the instances which make the most sense, because we didn't do much investigation into the things we talked about previously.
C
Otherwise you'll just be paying for nothing. But the third option, spot instances, is way cheaper, like 10 percent of the price, but it comes with a risk.
C
If you just take them and use them without any additional effort, spot instances, or preemptible instances depending on the cloud and how it's named there, can be taken away from you at any time. Depending on the cloud you'll get an indication: you can get an event, and depending on the cloud the instance will be taken in two minutes, in 60 seconds or in 30 seconds. In the developer community I saw quite a few tools which help with that, and you can even code it yourself easily: inside every node there's a metadata endpoint in the clouds, and the event I was telling you about...
C
...you'll get that event pushed to the node some time in advance, so you can listen for it and act upon it. For example, AWS is the generous one: it's two minutes. So if you get the event and spawn a new instance instantly, you can even get the new instance up faster than the original one is taken away from you.
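On AWS, for example, a node-local watcher can be as simple as polling the instance metadata service; this is a sketch assuming IMDSv1 (IMDSv2 needs a session token first) and that the hostname matches the Kubernetes node name:

```shell
# Poll the EC2 metadata endpoint for a spot interruption notice.
# It returns 404 until an interruption is scheduled; once it returns
# a JSON body, you have roughly two minutes to react.
while true; do
  if curl -fs http://169.254.169.254/latest/meta-data/spot/instance-action > /dev/null; then
    echo "Interruption notice received, draining node..."
    kubectl drain "$(hostname)" --ignore-daemonsets --delete-emptydir-data
    break
  fi
  sleep 5
done
```

In practice most people run an existing tool, such as a node termination handler daemonset, rather than hand-rolling this.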
C
Google gives only 30 seconds, so it's a bit more tricky; Microsoft is around a minute or 90 seconds. But there are ways to work around that and have a reliable way of getting those 90 percent savings, especially on workloads that are maybe not that critical. You can experiment with that and see if you start to feel comfortable that it works for your application.
C
I've seen development clusters or specific job clusters which run 100 percent on spot instances, and I'll give some use case examples at the end of the talk.
C
Then there's this spot price drift. It's not that you get a spot instance at 90 percent off forever; after a few days its price can increase. You need to understand that the spot market is, you could call it, like a stock market. It's a market: there's demand, and that sets the prices.
C
If there's more demand, the data centers give capacity away at list price for on-demand; if there's less spot capacity, the price increases. Economics, or I don't know what to call it, but you know what I'm saying. So you need to remember that the spot price is dynamic, and you need to keep tabs on it as well to keep getting the results you expected when you started using it.
C
Here's a simple example. You have a Kubernetes cluster which is multi-zone, let's say across two zones, and you have a deployment. You may specifically say that your pods should run in each zone, so you have an HA setup. You feel safe: if one zone goes down, the other will keep working, unless both are in the same building and you get a flood.
C
If you know what I'm talking about. By default Kubernetes doesn't account for zones when it's routing the network, so you can get random calls between zones, even when your application doesn't really need to send a lot of data outside its own data center.
C
It might be just chatty pods. A simple example: a database and a backend service, if you're managing your database yourself, or two backend services chatting with each other. Depending on the amount of traffic, this cross-zone traffic can generate the same cost in your bill as the compute you use. But computers are computers, you can do a lot of things with them, and people are smart; we have smart people in the Kubernetes community. So in 1.24...
C
...a new beta feature was released called topology aware routing hints. There's documentation on kubernetes.io, and I'll show you on the next slide how it looks so you can read up on it later. But the key learning from it is that if all the conditions are right, like the services are not overloaded, A, B, C, D, and there are many conditions...
C
...your calls will be routed to the pods in the same zone. As I said, this is not very strictly controlled, but if all conditions are met, the happy path scenario, you won't be seeing these costs at all. So keep an eye on that. There's a lot of movement around this space in the industry currently; it's quite fresh, people are starting to realize that this can be maybe 20 percent of your bill and so on...
C
...and they start optimizing it, so expect new technology coming out in this field soon. This is what the feature is called; you can check out how to configure it and save some money further on.
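For reference, enabling it on a Service looked roughly like this in the 1.24 era (newer releases renamed the annotation to `service.kubernetes.io/topology-mode`); the service name is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend                      # hypothetical service
  annotations:
    # Ask kube-proxy to prefer same-zone endpoints when conditions allow.
    service.kubernetes.io/topology-aware-hints: auto
spec:
  selector:
    app: backend
  ports:
    - port: 8080
```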
C
So, as I said, some example use cases for the end of the talk. We actually run a CI/CD cluster, and we built it to dogfood our own technology, putting our CI/CD cluster on steroids while not paying a lot for it. What we did there: we selected very beefy machines, fast local storage, a lot of resources, on spot, and we automated it in GitLab so that we run our own runners inside the cluster. Then, when a developer raises a PR...
C
...it spawns jobs which create pods in that cluster, and once they finish, the nodes are deleted. So when there are no jobs, there are no nodes. Everything is automated, nobody needs to think about anything: low cost, high performance, a perfect use case. Machine learning jobs are another one.
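Pinning such jobs onto spot capacity is usually just a node selector plus a toleration in the pod template; the label and taint keys below are hypothetical and depend on how your provider or autoscaler marks spot nodes:

```yaml
# Pod template fragment for a CI or ML Job.
spec:
  nodeSelector:
    node-type: spot                  # hypothetical label on spot nodes
  tolerations:
    - key: "spot"                    # hypothetical taint keeping other pods off spot
      operator: "Exists"
      effect: "NoSchedule"
```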
C
We had one customer who was running tens of thousands of jobs at a time; one run would cost them around 500 bucks and take three hours, something like that. So we helped them move those jobs onto spot instances.
C
So it's not only cost savings but performance as well. Dynamic workloads with HPA, for example: we talked with one participant in the crowd before the talk about how you can have various business metrics based on which you configure HPA. And if you have a powerful, sophisticated autoscaler which adds the nodes at the time they're needed, you can also compact the pods later on.
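A minimal HPA sketch (names and thresholds are illustrative; scaling on business metrics additionally needs a metrics adapter feeding the custom metrics API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa                      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                        # hypothetical deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70     # scale out when average CPU passes 70%
```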
C
Your cluster can breathe out and breathe back to the same size, and you run at the performance you would like, but you don't sit at 30 percent utilization; you don't need that 70 percent overhead you'd keep around if you wanted to account for the peak hours. So that's a very good tandem with HPA. Development environments are also very dynamic: they're usually used during working hours, and if you're not keeping tabs on that, it can cost a lot. So, to wrap this up.
C
We get orders, or not orders, but tasks: we need to save costs, go on. You cannot avoid it anymore right now, this is the important thing. But what I want to say is that optimizing and engineering the efficiency into your infrastructure, and later on seeing how it works automatically and holds over time without you needing to do anything, means your management is happy with the bills you get. Or maybe it saved you so much money that you don't need to let people go, like recently.
C
C
As many times as you'd like, right. That particular use case cost a lot because of how horizontally it scaled: there were tens of thousands of jobs, because with their data sets they needed to do many different individual calculations which had to be done separately. But it ran maybe two times per week, sometimes maybe more often, and buying your own hardware, storing it and maintaining it while not using it the whole time...
C
...also doesn't make sense. If they would run those jobs 24/7, then yeah, maybe it's worth doing that. But when you're constrained to that number of metal machines, what happens when you need to scale a bit and have more jobs? Okay, we need another machine, I'll order it, we'll have it in two weeks. So there's no one right answer, but you should weigh your use case and think about what may be lying ahead.
C
Yeah, so not a question but an ask: provide some examples of how to monitor resources inside Kubernetes. So, metrics server. I wouldn't call it native, but some cloud providers come with it straight out of the box, Azure for sure; for AWS you maybe need to install it yourself.
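With metrics-server in place, the quickest first look at actual usage comes from kubectl (assuming your kubeconfig points at the cluster):

```shell
# Current node-level CPU and memory consumption.
kubectl top nodes

# Per-container usage across all namespaces, to compare against requests.
kubectl top pods --all-namespaces --containers
```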
C
Also, you can have Prometheus scraping each node with node exporter, ingest those metrics into your Prometheus and build some dashboards. Or not even build them, just install a template, there's a ton of them. So actually it's about an hour's task to start monitoring your usage.
C
So Kubernetes has quite a few different mechanisms for that. First the most simple ones: node selectors based on labels, but also node affinities, anti-affinities and taints, I would say. You can guide the scheduler where to put your workload, and those affinities and anti-affinities may be strict or loose, applying at scheduling time or at runtime and so on. So I believe Kubernetes has many tools built in which help you with that, and depending on the autoscaler layer you use...
C
...it might hook up to those configurations of your workload, or not.
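The strict versus loose distinction mentioned above maps to `required...` versus `preferred...` node affinity; a pod spec fragment, with example label values:

```yaml
# Pod spec fragment showing strict and loose placement rules.
affinity:
  nodeAffinity:
    # Strict: the pod will only schedule onto matching nodes.
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/arch
              operator: In
              values: ["arm64", "amd64"]
    # Loose: the scheduler prefers, but does not require, these nodes.
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["eu-central-1a"]   # example zone
```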
C
In our case, the product we are building, we comply with all the native Kubernetes tooling and best practices, and our goal is that you set what you want for your workload and don't need to care about the nodes anymore. Our autoscaler will, based on your "I want that instance family, I need that many resources" and these parameters, and there are many of them, for sure create the capacity in your cluster on time.
C
But the main point being: Kubernetes has these guidances for helping the scheduler place the workload and, for example, for routing traffic across the zones with topology hints. What are those topology hints actually doing? They're just filtering the iptables rules behind the service, checking which IP is sitting in the same zone, and giving you that IP. So Kubernetes is hard, but it's sophisticated and has all the tools; you just need to know them.
C
A
All right guys, thanks so much for hanging out for the second part of the session too. I'm sure it was lovely for everyone, and I know that people are always quite interested in a lot of the presentations here. But I also wanted to point out that Hannes had a very cool talk as well. I missed sections of it, but from the back of the room I was also trying to pay attention from time to time.
A
Yeah, feel free to network, feel free to talk more to the teams from Cast AI and Kubermatic. We've worked with Kubermatic before as well, to co-organize a KCD earlier and everything, so we had a great time with those folks; feel free to talk to them about a lot of this. I missed the demo, but when Hannes had submitted his abstract, I remember it.
A
It was very cool, because we read something like "how do you do it without the internet" and I was like: yes, this is going in, absolutely. I just wanted to reiterate that we're always looking for volunteers, co-organizers and co-hosts within the community. If you feel inclined towards hosting or organizing one of these, please get in touch. My name is Benazir Khan; I don't know if you knew it, but that's my name.
A
I should have said that at the very beginning, but here we are. Feel free to talk to me today, or we can connect via LinkedIn and keep the conversation going, since it's our last meetup here at the Kinfolk office and quite a few of us are quite connected to it.
A
I was also wondering if you all would be open to a selfie, maybe; we can all take a selfie on this side, with Kinfolk in the background. Let me know if this sounds good to you, then let's just do it before you break out for more beers, more refreshments, pizzas: grab a box or two, and keep chatting with Rokas and Hannes. Thank you so much, have a good evening.