Description
Now that you have your applications running on Kubernetes, are you wondering how to get the response time that you need? Tuning applications to get the performance you need on Kubernetes can be challenging. At the same time, there are a number of Kubernetes features that, when used in the right way, can go a long way toward getting the most out of the underlying hardware resources. This talk looks into every aspect of optimizing a Kubernetes cluster, from the most basic node affinities to advanced methods such as tuning microservices, each with examples and a demo. We will also look specifically at tools that help to not only right-size your containers but also optimize the runtimes.
A
Hi all, thanks for joining. Our first session for today is Optimizing Application Performance on Kubernetes by Dinakar Guntala. Thanks, and a brief introduction about Dinakar.
Dinakar is an architect on the Kruize project. Dinakar is focused on autonomous performance tuning and on exploring the usage of machine learning and hyperparameter optimization.
A
So, a brief introduction about the topic that we're going to talk about today. Now that you have applications running on Kubernetes, you may be wondering how to get the response time that you need. Tuning applications to get the performance that you need on Kubernetes can be challenging. At the same time, there are a number of Kubernetes features that, when used in the right way, can go a long way toward getting the most out of the underlying hardware resources.
B
Thank you for attending this session. I work at Red Hat, where my primary job is to see how runtimes such as Java can be made to run better in Kubernetes, and that is what I'll be talking about today.
B
So let me take a moment to define what I mean by performance. Traditionally, performance looks at three key aspects: throughput, response time, and utilization of system resources. These are the criteria that we'll be looking to optimize in today's presentation as well. However, I'll be confining myself only to compute.
B
As you can see, it has many microservices and a couple of databases, and each of the microservices is written in a different language and framework. The user is experiencing slow response times while doing a flight booking. Now it is up to the SRE or the IT admin to try and make the user experience better. So let us look in detail at the steps that an SRE can take to try to solve this problem.
B
The first aspect to be considered is observability. This is something that is very key. How closely we observe the system and all of the associated metrics will actually help us determine where the performance bottlenecks are and how to go about tuning them. There are a number of tools out there that can help you get better metrics; Prometheus and Grafana, for example, are a couple of the more popular ones.
B
I would also suggest that you take a look at OpenTelemetry, which is slowly becoming the industry standard when it comes to observability. One of the key things in observability is the granularity of observation. For example, if you're observing the pods on a per-second basis, then you get very accurate information, but that causes a higher overhead in terms of CPU, network activity, and, in fact, disk space as well. So there's a trade-off here, and you need to be very careful in setting that value.
B
Another aspect to consider would be to export additional operational metrics on a per-application basis. Things like Spring Actuator, Micrometer for Quarkus, or prom-client for Node.js can be turned on for your application, and they provide additional runtime-related metrics, such as the heap, which, as we'll see later, can be used to tune the application for better performance.
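To make such per-application metrics reachable by your monitoring stack, a common convention in Prometheus setups is to annotate the pod so that its metrics endpoint gets discovered and scraped. A minimal sketch, assuming annotation-based scrape discovery is configured; the pod name, port, and image are illustrative:

```yaml
# Sketch: expose a runtime metrics endpoint to Prometheus via pod
# annotations (honored only if your Prometheus scrape config uses them).
apiVersion: v1
kind: Pod
metadata:
  name: flight-booking              # illustrative name
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/q/metrics"  # Quarkus Micrometer default path
spec:
  containers:
  - name: flight-booking
    image: example.com/flight-booking:latest  # illustrative image
    ports:
    - containerPort: 8080
```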
B
When you have an on-prem cloud, you have the luxury of tuning the hardware, all the way from the BIOS, in each of your Kubernetes nodes. A common setting found in the BIOS relates to the choice of performance or power. Choosing power means you get better power savings but variable performance. The same setting bubbles up into the operating system or the hypervisor as well; in the case of Linux, it's called the scaling governor.
B
The other thing to consider, or at least be aware of, is whether or not to count hyper-threading while doing capacity planning. Let us say a server has 16 cores and two threads per core; that is counted as 32 CPUs. However, hyper-threaded CPUs give at most a 20% boost over a single core, and so it is best to ignore hyper-threading while calculating capacity. Now our hypothetical SRE has set up observability and has fixed the hardware.
B
What's the next step? Let's start simple: match the application to the hardware features that it needs. Node affinity is typically accomplished by setting the right labels on a node in a Kubernetes cluster. It is very useful if you want to assign pods to a specific hardware feature on the node, or maybe the node is reserved for a particular type of workload or namespace or a security constraint. In this example, we see that this particular pod, which is an ML application, will only run on nodes that have the GPU label.
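As a concrete illustration of the slide's example, here is a minimal sketch of a pod that is only scheduled onto nodes carrying a GPU label; the label key and value (gpu: "true") and the image are illustrative, not from the talk:

```yaml
# Sketch: restrict an ML pod to nodes labeled gpu=true
# (label a node with: kubectl label node <node-name> gpu=true)
apiVersion: v1
kind: Pod
metadata:
  name: ml-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: gpu
            operator: In
            values: ["true"]
  containers:
  - name: ml-app
    image: example.com/ml-app:latest  # illustrative
```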
B
Another way to constrain pods is to use pod affinity and pod anti-affinity. If there are pods that commonly communicate together, or maybe they share some common resources, then it makes sense for them to run on the same node. We can use pod affinity rules to make sure that they all run on the same node. But what if you don't want pods from one application, A, to run on a node if you have pods from application B running on that node? Maybe both applications are network-heavy, or both use the GPU extensively.
B
Whatever may be the case, we want to make sure that pods from application A and application B run on different nodes. In that case we can use pod anti-affinity to make sure that they both don't run on the same node. In this example, we want a pod to be scheduled on a node only if there are other pods there that have the same security policy, S1, and we don't want to schedule it on a node that is running pods that use a different security policy, S2.
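A minimal sketch of that combination, using an assumed pod label security: S1/S2; the label key, pod names, and image are illustrative:

```yaml
# Sketch: co-locate with S1 pods, avoid nodes running S2 pods
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  labels:
    security: S1
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S1"]
        topologyKey: kubernetes.io/hostname   # "same node" granularity
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values: ["S2"]
        topologyKey: kubernetes.io/hostname
  containers:
  - name: app
    image: example.com/app:latest  # illustrative
```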
B
There are also other scheduling mechanisms, such as taints and tolerations and pod priority, that you can explore as well. Now we come to the most important aspect of performance tuning in a Kubernetes cluster: right-sizing. Right-sizing applications greatly helps to get the best possible performance, and this is done primarily by setting the CPU and memory requests and limits.
B
It is very important, as a best practice, to always specify the resources, to enable Kubernetes to make the best possible scheduling decisions. This usually means that you have to get either the Guaranteed or the Burstable QoS class, and avoid BestEffort, which is what you get when you're not setting anything at all.
B
To
I
mean
one
thing
that
we
do
need
to
make
sure
is
that
for
the
best
possible
performance,
we
need
to
set
the
requests
to
cover
the
consistent
Peaks
that
we
observe
and
the
limits
should
be
set
to
handle
any
spikes,
so
do
ensure
that
the
limits
are
high
are
set
high
enough
during
observation
itself.
To
prevent
any
throttling
also
do
ensure
that
requests
and
limits
that
you're
setting
do
not
clash
with
any
limit
ranges
that
might
apply
to
your
namespace.
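A minimal sketch of what that looks like in a container spec; the numbers are illustrative and would in practice come from the peaks and spikes you observed (setting requests equal to limits would instead yield the Guaranteed QoS class):

```yaml
# Sketch: requests cover the observed consistent peaks,
# limits leave headroom for spikes (Burstable QoS).
apiVersion: v1
kind: Pod
metadata:
  name: flight-booking
spec:
  containers:
  - name: flight-booking
    image: example.com/flight-booking:latest  # illustrative
    resources:
      requests:
        cpu: "500m"     # consistent peak observed
        memory: "512Mi"
      limits:
        cpu: "1"        # spike headroom; too low causes CPU throttling
        memory: "1Gi"
```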
B
Now that we know that requests and limits are crucial to your performance, you might have a question: how do I arrive at the optimal values for requests and limits accurately? The Vertical Pod Autoscaler can help in that regard, but I suggest you use the Kruize tool, which I will talk about in a minute.
B
So,
for
example,
when
a
GC
is
triggered
in
Java,
so
this
might
actually
cause
a
new
pot
to
be
instantiated.
Instead
of
when
actual
load
is
increased.
B
You can also use external metrics, such as the number of concurrent users that your application is handling, but the best practice is to use objects that are known to Kubernetes as much as possible, such as packets per second or requests per second. So in this particular case, what we are saying is that if the average value of packets per second goes beyond 1K, then start a new pod; or, in this other case, if requests per second goes beyond 10K, then we start another pod, and so on.
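Rules like these are expressed with a HorizontalPodAutoscaler. Below is a minimal sketch in the spirit of the Kubernetes custom-metrics walkthrough; it assumes a metrics pipeline that actually serves packets-per-second and requests-per-second, and the Deployment and Ingress names are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: flight-booking-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: flight-booking       # illustrative target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                 # per-pod custom metric
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: "1k"     # scale out when the pod average exceeds 1K
  - type: Object               # metric described by another object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: flight-booking-route  # illustrative
      target:
        type: Value
        value: "10k"           # scale out beyond 10K requests per second
```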
B
Using a cluster autoscaler definitely helps to make the best utilization of the underlying resources, especially when you're scaling down, where you make sure that you free up the resources that are not being used. But you need to be very careful not to cause any service disruption in the process, especially when you're scaling down; specifying the maximum unavailable pods in the pod disruption budget definitely helps in this particular regard.
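A minimal sketch of such a PodDisruptionBudget, assuming the app's pods carry an illustrative app: flight-booking label:

```yaml
# Sketch: allow at most one pod of the app to be down during
# voluntary disruptions such as autoscaler-driven node drains.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: flight-booking-pdb
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: flight-booking
```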
B
Now, if you're an SRE, you'll know that every runtime has many, many tunables; Java, for example, has more than 100 of them. But you will also know that you should never touch them. Why? Because who knows what kind of an impact they have, they have all these dependencies on other tunables, and there are just way too many of them for you to manually test and figure out.
B
Also, how these runtimes behave in Kubernetes environments is not always clear. So guess what: most of the time an SRE is just limited to tuning the app itself, or tuning just the CPU and memory. And by tuning, we all know what normally happens: we just end up doubling the resources until the problem goes away.
B
So if you're thinking there's got to be a smarter way, you're absolutely right. We're really happy to announce that we have this new tool called Kruize Autotune, and it's available publicly. It's an open source project from us at Red Hat. I do encourage you to take a look at our GitHub repo, given below here.
B
So let's take a deep dive into the whole process that Autotune uses to tune an application. The first step here is that the SRE encapsulates all of the performance requirements into an objective function, which is an algebraic expression, such as A² / (B + C), where maybe A can be your throughput.
B
And B can be your response time, C can be costs, and you want to either maximize or minimize the whole thing. In this particular case, for example, if it is A² / (B + C), you might want to maximize it. Each of the individual variables of the objective function is specified as a Prometheus query, and the whole thing is applicable to a particular Kubernetes deployment, which can be selected using the selector out here.
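To give a feel for how such an objective function is written down, here is a hypothetical sketch of an Autotune custom resource in the spirit of the examples in the Kruize repo; the apiVersion, field names, and queries are approximations for illustration, so check the kruize/autotune GitHub repo for the exact schema:

```yaml
# Hypothetical sketch of a Kruize Autotune objective function;
# field names and queries are illustrative, not the exact CRD schema.
apiVersion: "recommender.com/v1"
kind: Autotune
metadata:
  name: flight-booking-autotune
spec:
  slo:
    objective_function: "request_sum/request_count"  # average response time
    direction: "minimize"
    function_variables:
    - name: "request_sum"
      datasource: "prometheus"
      query: "rate(http_server_requests_seconds_sum[1m])"    # illustrative
      value_type: "double"
    - name: "request_count"
      datasource: "prometheus"
      query: "rate(http_server_requests_seconds_count[1m])"  # illustrative
      value_type: "double"
  selector:
    matchLabel: "app"
    matchLabelValue: "flight-booking"  # the deployment being tuned
```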
B
At the heart of Autotune is the Bayesian optimization, which is provided by the HPO service that you see here; HPO is nothing but the hyperparameter optimization service. Bayesian optimization is a type of black-box optimization that builds probabilistic models of the objective function that you have specified, and these are searched efficiently to arrive at either the global maximum or minimum, as required.
B
So essentially, what's happening here is that the Bayesian optimization gives you a configuration to try out for this particular deployment. We have figured out what the layers of the application and the stack are, and then we send all of the tunables from those layers to the Bayesian optimization, which gives you a particular config value to try out. The experiment manager here deploys it, and then we get a response.
B
So let's take a quick look at how this works. I have a small demo out here. I have Minikube running on my laptop, as you can see, and it has Prometheus and Grafana installed in the Minikube cluster. I also have Autotune running here, and I have a TechEmpower application, which is a Quarkus RESTEasy Hibernate application, that is also running here in the cluster. So now the challenge is to try and optimize this particular benchmark that is running.
B
So what are we trying to optimize? We are trying to optimize the response time, to try and minimize it. Response time is defined as request_sum divided by request_count, where request_sum comes from this particular Prometheus query and request_count from this one, and this applies, of course, to the TechEmpower deployment, and we are trying to minimize response time here. So let's apply this YAML, and you can see that Autotune starts to deploy.
B
We're monitoring the load here, but just to give you a sense of how the whole process works: you can see that it is starting multiple trials, and you can also take a look at the list of experiments here to see the configs that it's actually trying out. So here you can see it is trying certain values of CPU and memory, and also Java options that include the Hotspot layer that it has found, and the Quarkus layer.
B
Very quickly, you can also look at all the layers that it has found in the application; it has found the base container, Hotspot, Quarkus, and so on. If you keep monitoring this, it runs the whole set of trials and then comes up with the best trial at the end of the experiment, to say this is the one that had the best configuration, and you can look at that particular trial number to figure out what the best configuration was.
B
So that's a very quick demo of Autotune. I would definitely recommend that you check out our GitHub repos; we have this demo also available on public GitHub, in this particular repo on github.com.
B
This is the one that I was running just now; you should be able to run it on your own laptop as well, and this is the main GitHub repo. So now that you've seen a very quick demo, what's really happening here is that the Bayesian optimization is quickly trying to find a particular config that gives you the best result. I usually compare the Bayesian optimization to a genie.
B
However, there's one caveat here: the genie can only be asked for one wish. You can invoke the genie any number of times, which means that you can invoke the Bayesian optimization for any number of experiments, but for every experiment that you're running, which may consist of up to 100 trials, there can only be one objective function, only one wish. So you need to get really creative with your wish.
B
You
know
it's
something
like
I
want
to
be
on
a
beach
in
Hawaii,
with
my
wife
and
kids,
and
walk
into
my
large
house
with
this
great
internet
and
so
on.
So
basically
you're
trying
to
put
in
all
of
your
requirements
into
that
one
objective
function
and
then
the
Bayesian
optimization
will
try
to
optimize
for
that
particular
objective
function,
so
so
you've
heard
all
of
the
theories
so
far.
So
let's
take
a
look
at
some
of
the
results.
B
As I mentioned, we were using the TechEmpower framework, which is an industry-standard framework with benchmarks for all different kinds of runtimes: Java, Golang, Rust, Node.js, you name it. We specifically picked the Quarkus RESTEasy benchmark and ran it on an OpenShift cluster.
B
The cluster had this particular configuration, and these were all of the different tunables that we used: two tunables at the container layer, a bunch of tunables for the Hotspot layer, and a few for the Quarkus layer as well. These were the ranges within which they were operating. We had set the Kubernetes requests to be the same as the limits, and we were using the G1 garbage collector with the max RAM percentage set to 70.
B
The incoming load was constant at just 512 users. We started off initially saying: okay, we want to just minimize the response time. But then we quickly realized that, as I mentioned, Bayesian optimization only tries to optimize that one aspect, possibly at the cost of other aspects. We found that the low response time came at the cost of higher CPU usage.
B
Then we did another experiment where we said: okay, fix the CPU usage, but give me a lower response time. But this time we found that it was giving us higher max response times, or tail latencies. So this was the third take, where we said: okay genie, give me the lowest response time and high throughput, and at the same time keep the max response time, or the tail latencies, down, and keep the resources fixed. So essentially we gave it weightages as well.
B
We said response time has the highest weightage, throughput comes next, and max response time is the least in terms of the weightages, and we made sure to fix the CPU and memory so that the cost is the same.
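One illustrative way to fold those weightages into a single wish; the exact expression used in the experiment is not shown in the talk, so this is only a sketch of the idea:

$$\text{maximize} \quad \frac{w_2 \cdot \text{throughput}}{w_1 \cdot \text{response\_time} + w_3 \cdot \text{max\_response\_time}}, \qquad w_1 > w_2 > w_3$$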
B
The zeroth value here corresponds to the default, where there were no changes done to the application configuration, with the same resources as the rest of the experiment. Here we see that the default was about 14.21 milliseconds of response time, and then we see Autotune coming up with different configurations and trying them out, and we got the best configuration around the 97th trial, where it got a response time of 2.39 milliseconds.
B
So you can see here that this actually achieved about 83% better response time, with the throughput being almost the same, and of course the tail latencies were low as well. You can take a look at all of these results in the github.com/kruize/autotune-results repo; these are available publicly as well. So you can see that the max response time in the Autotune case is down.
B
The CPU usage is almost the same as the default, and we got really good response times, about 83 percent better, as well. We also calculated the cost of the hardware by looking at the data that we got from the previous experiment. For both the default and the Autotune config, we measured how many instances it would take to handle one million transactions and applied that to a matching AWS configuration, a1.xlarge, which has about 4 cores and 8 GB, and we observed that with the Autotune config there's an eight percent reduction in cost as well.
B
This is the corresponding best configuration; the right-hand column is the value for each of the tunables that we saw previously. Interestingly, you see that Autotune has flipped some of the defaults from what the runtime itself sets.
B
Okay. So, in summary, if you are an SRE: your first step is to set up observability; don't forget to tune the hardware; set the node and pod affinities; ensure requests and limits are set for all app pods and that they're right-sized; use app-specific scaling metrics if possible; ensure that there is no disruption, with the pod disruption budget; and please do check out Kruize Autotune for autonomous tuning. We do plan to come back to you with some updates.
B
So, lastly, do check out the Kruize GitHub repos. If you have any questions, reach out to us on the Kruize Slack or send us a mail. We do look forward to hearing from you all. Thank you so much for listening.
A
Hey, thanks a lot for the session; it was really informative and very helpful, and I hope the participants benefited from this very informative session. If you have any questions, you can pose them in the chat, and Dinakar is available to answer.
C
Thank you, Ashok. Happy to answer any questions, folks.
C
Oh sorry, yeah.
A
Looks like there are not many questions on the chat that I see, but I have one question: say there is a fresh grad who would like to get into this open source space, what is the recommendation that you would like to give? Or maybe someone who would like to switch their career path, maybe after 10 years of experience.
E
I think that's a general question. So I would say that the first step I would always suggest is for people to understand what their own preferences are. There is a wide variety of open source software available today: system software, front-end, back-end.
E
You know, machine learning, cloud, and so on. So there are multiple different open source projects available, and I think the best way is to first understand what your own interests are and then find projects in that particular space. For example, I'm a guy who's been interested in systems technology all my life; I've worked on operating systems, the JVM, and now Kubernetes, and so on.
E
So I always tend to look around in this space and see what new things are coming up, and of course, these days I'm interested in machine learning. Who is not, right? Everybody; that's the buzzword now. If you look around, I'm sure you'll find an open source project that you are interested in. So that's the first step. The next step is to find out what the community around it is: find out who the different stakeholders are. Do they have a Slack channel? Do they have Gitter?
E
Is there a mailing list? Go join there, and find out what's the best way to interact. Look at GitHub, obviously, or GitLab, or whatever is around. Look at issues: most projects these days have something like a "good first issue" label marked on GitHub issues. So you look at that and see what the issues are.
E
It could be simple things like fixing language, or maybe some simple issues, and so on. So you can look at that and see if you can start getting into the project by understanding the process: how do you submit a PR, the basics of GitHub, and things like that. And then you can read more on the topic, look at videos, and talk to experts.
A
Sure, thanks. That answers my question, and I like the Red Hat on your backdrop. Thanks.
A
Yeah folks, please make use of this time for Q&A, and if you don't have any questions, then we'll let Dinakar go so that he can enjoy the rest of his weekend. It's already Friday.
E
Thank you all, it's been a pleasure being in this. Thank you so much for organizing this great event.
A
We have the next session starting in a few minutes, so please hang in there. Thanks.