Description
Join us for the #DevOps track at #GitHubConstellation India 2022. Visit githubconstellation.com
----------------------------------------------------
As always, feel free to leave us a comment below and don't forget to subscribe: http://bit.ly/subgithub
Thanks!
Connect with us.
Facebook: http://fb.com/github
Twitter: http://twitter.com/github
LinkedIn: http://linkedin.com/company/github
About GitHub
GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Millions of people use GitHub to build amazing things together. For more info, go to http://github.com
B
Hey everyone. I hope you all enjoyed day one, and I'm sure you're going to enjoy day two as well. But for me, day one was super crazy. It was making me go nuts. I was seeing Mohit and Dana do their hosting act — changing dresses, props, their increased energy levels. It was too much for me to handle. So I went to Maneesh, our general manager, and I said, "You need to pull me out. This is super, super, super crazy."
B
In fact, I got levitated a little bit, and then I asked Maneesh, "Maneesh, are you sure? Like, you never believed in me so much." Then Maneesh said, "To be honest, the others are working." Anyway — welcome to day two of the DevOps track. My name is Diwakar Kusuma, and I'm a Senior Customer Success Architect.
B
Basically, what I do is make sure that customers are getting the maximum value from the investments they're making. That's my job. It's super cool, and super easy in a way, considering that it is GitHub — obviously they're going to see value in it, right? So it's less work for me. Don't tell my boss, but that's the fact. Anyway, with me I have Richa. Hopefully she has a much crazier schedule and tougher work than me. So I'll let her introduce herself. Richa?
C
Thanks, Diwakar. First of all, a very good morning to everyone joining us today for GitHub Constellation day two. I am Richa Kumar, Senior Director of Software Engineering at GitHub, and I focus on making GitHub the home for all student developers. I'm based out of Hyderabad, and it's a pleasure to be here with Diwakar.
B
Oh, I'm still stuck with day one. For me, to begin with, Nandan — Nandan was amazing, right? He was absolutely telling the growth story. I don't know how we could summarize it in 30 minutes, but all those facts and figures — that was overwhelming for me, and it actually took me for a ride. You know, every day in your life you're using UPI and Aadhaar, and soon it's going to be like that with the digital commerce network.
B
All of that we are going to witness — and we have witnessed — in our generation. It's not like all this happened a couple of decades ago; it happened now, right? It's amazing that these initiatives came from the government, and at a speed that you could not have imagined in the past. And even some of the moments that we discussed with the Ministry of Education —
B
I remember Dr. Buddha's statement, hinting on what to learn — the ABCDEFG, right? The analytics, the bitcoin, etc. Wow, I was thinking, wow, that's super! That's free advertisement for us, right? That's super cool for me! What about you, Richa?
C
Yeah, I actually got a chance to catch some of the keynotes from this morning, and it's so exciting to hear about the thriving developer community in India. Stormy shared some numbers, and we heard from Thomas yesterday: 8 million developers on GitHub in India, trending towards 10 million in 2023. And the impact — 10 million projects worldwide, if I'm getting that right, depend on packages that are developed in India.
C
This is amazing. You all keep that momentum going, because you are helping make India a leader on the path of digital innovation. In one of the keynotes, Stormy also talked about enterprises. Enterprises, too, are now taking dependencies on open source software and contributing to it, and as a Hubber, I'm so proud that GitHub plays an important role in facilitating this collaboration. Community has always been at the heart of GitHub's mission. And so, how is —
C
How is GitHub helping developers? Matthew, in his keynote, talked about some cool new features coming our way that will enable this collaborative style of working, help developers be more productive, and write secure software. There were discussions of Copilot, CodeQL and Codespaces, and we are going to be hearing about all of these today across all three tracks, so stay tuned.
B
Wow, looks like you're all caught up, Richa — thanks for summarizing that for us. For our next session, we are going to have a talk from Prashant Subramaniam. He's joining us from Google Cloud, where he's a developer advocate, and he's going to teach us a thing or two about how we can use CI/CD along with Google Cloud. So it's going to be interesting — stay tuned. Prashant, over to you.
D
Let's start with a simple thought: we see software all around us. What is common between the software that's running all over the world, in production, on various devices? Any thoughts? All of this is written by a developer. Now, of course, I am oversimplifying this — you have no-code tools which can produce software, and you have generated code these days. However, all of that was also written by a developer. So developers are the ones actually building a lot of software, and it's important to take care of developers.
D
Take care of developers' needs, so that they can be more efficient and write better software. Now, let's look at this whole developer experience from an outside point of view. This is how the world sees it: you have a developer, the developer writes some code, the code ends up on a production server, and you have an end user interacting with the application, either on a server or on a device.
D
Not all the code that makes up the software is written by one developer. There are other dependencies in the source code which need to be put together, and this is what makes up the complete source code and the final product, or the final software. In larger applications it is not just this one developer doing this; there are also multiple developers working on the same source code and using all of these dependencies to build the final software package.
D
Now, not every developer or contributor is necessarily on the same version of the source code, either. Each of them is at a different stage of the development process, working on their own features, and in order to be working on the latest version of the repository, it needs some sort of hygiene. It needs continuous integration back into the fork, or back into your branch.
D
We recommend frequently pushing smaller changes to the source code, so that it's easier to merge, easier to resolve conflicts, and easier to integrate the code together. Now, if this hygiene is not followed, then the feature branch will quickly become out of sync with the main source code, and it will become a nightmare to maintain.
D
Then again, you have developers working in environments that they are comfortable with. There are developers working on Windows, on Linux, on Mac, using different toolchains and different IDEs, and a lot of the time the runtimes and software package versions that each of them is using differ from one environment to another. When the maintainer has to integrate all of this, they are left scratching their heads: has the right hygiene been followed by each of these developers? Have all the tests been run?
D
Are the right versions of the dependencies used? These are the questions that come to the mind of the maintainer. Now, of course, this can be checked right at the pull request — when the pull request comes in, the maintainer can check all of this — but it needs a lot of effort, and this manual effort, and the ambiguity around the whole process, is what makes it a very poor experience, both for the developers and contributors and for the maintainer of the repository.
D
So what you need to think about is: the piece that a single developer touched — is it working well together with all the other pieces? While we have built integration tests, have the tests been run? Have the tests been run correctly? Has the integration environment been set up correctly by the developers before they push the code?
D
How difficult is it to set up this environment? It is an environment where all the different pieces of the software have to run together for every change that the developer makes. How much time does it take to bring up such an environment, where all the software first has to be compiled, built and then deployed? And then, who is going to maintain and troubleshoot such an environment? Who is going to monitor that all the code is compiling fine, and that everything is running?
D
Firstly, use a good source code management system — one that allows many developers to work on the same piece of code, on the same source code. You want them to be able to collaborate, and to manage different versions of code in an easy way. Secondly, use containers: use containers to package all your compile-time and runtime dependencies into one single unit of deployment.
D
Finally, with all of this in place, you still want to ensure that you have a good automation pipeline — a CI/CD pipeline. This will help take away all the mundane work that a developer has to do: firstly, taking away the manual effort needed by the developer when they do the integration and the testing; and secondly, covering the case where the developer forgets to do this at some point. So automation takes away all this effort, and all this thinking, from the developer as well.
D
We will have one of the developers opening a pull request with some changes to this code repository. Then, using GitHub Actions, we will be pushing the source code into Google Cloud Build. Now, Cloud Build is Google's serverless CI/CD platform that allows you to build, test and deploy your code. Once the code is built, it is then pushed to Artifact Registry within Google Cloud. Artifact Registry is a single place to store, manage and secure all of your build artifacts, including your container images.
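The handoff described here — a pull request triggering GitHub Actions, which submits the source to Cloud Build — might look roughly like the sketch below. The secret name, config path and step versions are placeholders and assumptions for illustration, not values shown in the demo.

```yaml
# .github/workflows/build.yml — on each pull request, hand the source to Cloud Build
name: Build
on: [pull_request]
jobs:
  cloud-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: google-github-actions/auth@v1        # authenticate to Google Cloud
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}  # hypothetical repo secret
      - uses: google-github-actions/setup-gcloud@v1
      - run: gcloud builds submit --config cloudbuild.yaml .  # run the Cloud Build config
```

The workflow itself stays small: all of the build and test logic lives in the Cloud Build configuration that is submitted with the source.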
D
And here we will see that the endpoint should be hit. So here I will open the application that I just started, and you see that the endpoint is hit and the response comes back. This is the application that we're going to be working with. So, for the purpose of this demo, what I will do is go into the source code and make a change to a file.
D
It is always good practice to write some indication of what you're committing, but in this case I'm going to skip that and very quickly make these changes so that I can show the demo. This is also going to make the test case fail, so I will adapt the test case accordingly, to check for what I send back from the code change.
D
What we will be doing here is checking the code out and pushing it to Google Cloud Build; Cloud Build will build it, deploy it into Artifact Registry, and then we will see a success. I will also jump over to Google Cloud in the meantime, and let us look at the build.
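A Cloud Build configuration doing what the demo describes — run the tests, build the container image, and push it to Artifact Registry — could be sketched as below. The region, project, repository and image names are placeholders, and the Node.js test step is an assumption about the demo app's stack.

```yaml
# cloudbuild.yaml — test, build, and push the image to Artifact Registry
steps:
  - name: node:18                        # run the tests first; a failure stops the build
    entrypoint: npm
    args: ['ci']
  - name: node:18
    entrypoint: npm
    args: ['test']
  - name: gcr.io/cloud-builders/docker   # build the container image from the Dockerfile
    args: ['build', '-t',
           'us-central1-docker.pkg.dev/my-project/my-repo/app:$SHORT_SHA', '.']
images:                                  # on success, push the image to Artifact Registry
  - 'us-central1-docker.pkg.dev/my-project/my-repo/app:$SHORT_SHA'
```

Because the test step comes first, a failing test case stops the pipeline before any image is built or pushed — which is exactly the failure shown later in the demo.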
D
Finally, we use an action from the marketplace to write a comment on the pull request — so this is the action that we have defined. Let us go back into GitHub and see where we are with this. It has already completed, in under a minute. We see that it is successful: it has been built, and when it builds you will see that all the tests are run. This is where we see that the tests are successful, and hence the final image has been built and pushed into Artifact Registry.
D
This will then talk to an existing Kubernetes cluster that is already present. It will deploy the image that is in Artifact Registry into this cluster, essentially updating all the images running on that Kubernetes cluster. So what we're seeing here, again, is that the code has been tested and compiled, the code is brought into the main branch, and the code is now deployed back into the development environment, where everyone is again working with the latest code base.
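The deploy step just described — triggered on merge to main, pointing at the already-running cluster and swapping in only the changed image — might be sketched like this. The cluster name, location, deployment name, image path and secret name are all placeholders, not the demo's actual values.

```yaml
# .github/workflows/deploy.yml — on merge to main, roll the new image onto the existing cluster
name: Deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: google-github-actions/auth@v1
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}       # hypothetical repo secret
      - uses: google-github-actions/get-gke-credentials@v1  # point kubectl at the cluster
        with:
          cluster_name: demo-cluster
          location: us-central1
      - run: |   # update only the changed image; Kubernetes performs the rollout
          kubectl set image deployment/app \
            app=us-central1-docker.pkg.dev/my-project/my-repo/app:${GITHUB_SHA::7}
```

`kubectl set image` is what makes this an in-place update rather than a fresh cluster: the existing deployment keeps running while Kubernetes rolls the pods over to the new image.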
D
Now, this approach is interesting because, as I showed before, when you have a large application, it takes a lot of time to build the entire application, spin up a brand-new cluster, and push all the images into it. So what we are doing here is keeping a cluster that is already running, and going in and updating the image within that same cluster. Only the image which has changed is rebuilt and pushed into the cluster.
D
Going back to the pull request that we just created: the pull request is now ready to be merged. So here I will just merge this pull request, and we will confirm the merge. What we also see here is that, from the last step, a comment was entered into the pull request, so we know that that action was also completed successfully.
D
Now the code gets pulled and merged immediately, because, of course, it has been tested before, and it can be merged into the main branch. However, the action will now trigger again, to deploy the latest version of this code into the Kubernetes cluster that already existed. So here you see that this workflow is now running.
D
All right, so the pull request has been created, and I will jump back into the action that we had before to see where we are. The action just completed, and here you see that this image has again been compiled: we have used the latest code from main, we have built an image, we have pushed it to Artifact Registry, we have connected to Kubernetes Engine, and we have deployed this latest image to the existing Kubernetes cluster.
D
We will see that a minute ago we had a new build, and this build is successful. Now, Artifact Registry also helps by scanning all the images present here for vulnerabilities, so we can use things like Binary Authorization to ensure that only images without vulnerabilities are made available in production. These are additional benefits that you get.
D
In fact, we can see that on the GitHub action itself: in the deploy step, we will see the external IP on Kubernetes where the service was deployed — and let us open this.
D
Looking at this change, we will see that it failed at the point where Cloud Build was called. So Cloud Build failed — and it failed because of failing test cases.
D
We had a failing test case. So this is an example of how your development pipelines can be built, where all I'm doing is focusing on the code and on the tests, and I don't have to do anything with respect to operations — running the tests, automation, deployment and so on. This is the power of GitHub Actions, used with Google Cloud, to set up the workflows that you would like to have. I will cut back to the slides now.
D
Awesome. So, in summary, I would just like to leave you with a few key takeaways. We spoke about this before, but to reiterate: use a high-quality, robust source code management system; adopt containers to build images that are consistent all the way from development to test environments to production; test, test, test, and write more and more tests — this is super important; and automate everything that you can, and build CI/CD pipelines to make your job easier as a developer.
D
And finally, a couple of additional things. Try to find a build system where you can save your build configurations along with your source code. That way, when something breaks, you know whether it is your source that's breaking or your build configuration that's breaking. This will also help you actually build these automations.
D
And finally, leverage some of the capabilities of the platforms you are using — for example, Artifact Registry, which automatically scans for vulnerabilities, as I showed. So with that, I come to the end of this demo and the end of this talk. I hope I've left you with something in terms of how you can build your CI/CD pipelines and how you can use GitHub Actions to automate your workflows. If you have further questions, please connect with me offline on social media; I'll be glad to help you. And with that, thank you for your time.
D
Thank you for spending time with me on this talk, and I hope you had some good takeaways. Thank you.
B
Wow, thank you, Prashant — thanks for educating us on how we can use CI/CD with GitHub Actions and the Google platform. That's a very good one. But another key takeaway that I want developers to consider is the power of choice. I think our leaders have reiterated many times that developers are at the center, right — developers are the boss — so, developer first.
B
Even in this demo, if you imagine, you always have the choice: GitHub being used only for repository management, or a combination of GitHub plus Google Cloud, or any other providers. You always have the option — that's the power of choice. Please do consider it. I heard that there are some tweets coming our way. Richa, are you ready?
B
All right — Ashish Chawla: "India is leading the fintech revolution with a huge digital ecosystem." He's quoting Nandan Nilekani at GitHub Constellation. Thank you. Thank you for that.
C
You know that open source has won, and really it's a very interconnected world today. We all, as developers, are even further empowered, because we can depend on what each of us is building and we can build on top of it, and so it's really great to see that open source culture thriving. Thank you again, Carrie.
B
One more quote: "What does the CEO of GitHub and I have in common?" Wow, what's that? Passion!
B
Thank you — please keep the tweets flowing, right, and do remember, you know, the tweets.
C
You know, it's amazing to see how the world has become so connected, and the role that social media platforms are playing. Sometimes I feel very old, you know, because I feel like I'm still getting used to that, while the next generation has learned how to use it to their advantage. And talking of the next generation — in the context of the last few years, with the pandemic — Diwakar, how have your children been doing with online schooling?
B
I have two naughty boys, and this pandemic has been awesome, in a way — if you ignore the negatives of it, of course. It did bring closeness among the family members; we got to spend more time with the kids. And if you think about the tech part of it, my younger one is using Microsoft Teams, and he keeps checking with me on how I'm using Teams and how his Teams is better than mine, because he gets to submit his work through Teams and I don't get to.
B
And they also ask, you know, how many sparkles I got in Slack, and they even keep count of it — 280, 290, whatever. For those who don't know, sparkles are a friendly way of acknowledging someone for their contribution.
B
It's like a kudos — a pat on the back, right? So we have, like, the sparkles, and my kid keeps tabs on the counts, and once he shouted from my workspace all the way to the living room, where his mom was, saying, "You know, Daddy did not do any good job today — no sparkles or anything." It's all fun, fun, fun, yeah! So, what's up with you, Richa?
C
Yeah, so I remember you mentioning that you folks had started using Duolingo for learning some languages, and I love Duolingo too — I've been using it to learn Spanish. You know, my kids are learning it, and I'm like, I'm not going to let you have your secret language — I'm just kidding. It's a lovely language, and it seems so easy to learn; perhaps Duolingo makes it easy. And I believe Duolingo is on GitHub.
C
Exactly. And so, talking about these past few years — while it has had its own challenges, online education, I feel, has opened up doors in many ways as well, making learning a lot more accessible.
C
Like, you know, children are learning things from teachers who are not in the same location — even music, learning instruments. It's quite amazing. And something else that I've noticed is that, even though my kids may be spending a lot of time playing video games — you know, that's another challenge — they're becoming aware of a lot of things: what is security, you know. They're also learning to appreciate what effort it takes to build and run —
C
— you know, software at scale, and reliably. They were complaining the other day when their favorite video game platform was down, and I think they were easily able to connect it to the times when my team or I are heads-down fighting fires in our own service. So people learn when they are immersed in a situation and experience problems firsthand, and I think GitHub sees this as an opportunity and a responsibility: to help the next generation by giving them some great tools and, you know, programs to guide their learning.
C
So, coming back to today — Diwakar, have you heard the term "observability"?
E
Hello, everyone — welcome to GitHub Constellation. Today we will be doing a panel discussion on observability in DevOps and open source, and we are joined by two panelists from the observability community itself. So hello, Pranay — would you like to introduce yourself?
G
Yeah, sure. Hey guys, I am Pranay, one of the co-founders at SigNoz. SigNoz is an open source observability platform; we help developers monitor applications and troubleshoot any problems.
G
I'm basically from an engineering background; I accidentally got into the observability domain and am enjoying it a lot. So, looking forward to the discussion.
H
Hello, everyone. I'm Arvid, and I work at Elastic as a developer advocate — so, pretty similar things: observability, DevOps, and also a bit of search. Yeah, excited to be here, and to talk to you all more about observability. Thank you.
E
And for all of you out there, I'm Josh. I work at Principal as a product manager, and I'm a co-creator and one of the maintainers of Hypertrace, which is our open observability platform. I've been in observability for the last couple of years, exploring some of the interesting things there. So we will talk about many of them today, and we will have some interesting opinions, questions and answers.
E
So, starting with the very first and most interesting question: what is observability? Pranay, what is observability for you?
G
Yeah, so that's an interesting one. I am an electrical engineer by training, and in electrical engineering I always dealt with systems and how you can understand what's going on inside them. So to me, observability is figuring out what's going on within a system by monitoring its external outputs — and observability, broadly, as such, is independent of software.
G
In software, I think observability is figuring out what's going on within your software stack, within your servers, within your infra stack, by monitoring metrics, logs and traces. So I have sort of a loose definition of observability: anything which helps you understand how your system works, what its bottlenecks are, where it's possibly failing, and potentially how you can solve that — that is observability for me.
E
Definitely — that's a pretty interesting and well-articulated way to put it. As I have been going through the definitions and understanding what observability is for me, I have tried to articulate it as the capability which enables you to ask questions about your system. When we talk about metrics, logs and traces — which is mostly what people talk about when talking about observability — those are more a way to fuel the answers: a way to get the data which can help you answer those questions about the system.
E
But yes, as you pointed out, it's a capability which lets you ask questions of the system and get answers whenever you want. Arvid, what is observability for you?
H
So, to be honest, I tend to agree with what Pranay was saying. I heard this term maybe three or four years back, when it was getting popularized, but it's a general concept. I had always heard about, you know, the monitoring aspect of infrastructure — how do you monitor things — and profiling, coming from a Java background; and then people started to add more, like logs, metrics and traces. But when you put it all together, you start to realize the pitfalls — what's happening when there is an issue. So, that way, I think observability is getting to know your infrastructure from a 360-degree point of view. It might be a single data point that you take and, you know, try to cross-reference or check, or use to get to an RCA.
H
The standard is evolving, and many more data points and things are being added to it. So, that way, I think it's ever-evolving as a definition, and it's going to put more and more tools in the hands of developers as we go.
E
Yeah, and definitely it's not just about the application. As you rightly pointed out, it starts with your load balancer and goes to your back end — whatever happens in between, and whatever happens on both ends. Keeping track of everything, maintaining that context and those correlations — it all contributes to greater observability.
E
One of the thoughts I came across recently was this: systems are opaque by default, and observability is something which helps you compare the system as imagined with the system as observed — it gives us a chance to observe them. So, going to our next question: we all have been working around observability for some time now, and we have worked with customers and come across various use cases along the way. What are some of the interesting use cases or customer needs you came across while working around observability?
H
When you start talking about observability, usually you tend to think about the DevOps practitioners, maybe developers who are, you know, building and trying to work with the CI/CD system — all of that, right? But I also start to see a lot of SecOps people trying to use the same data for their analysis and to figure things out. So I think observability and the security area are merging, a lot.
H
Many companies are really riding this trend towards making it easier for security practitioners and infosec folks to, you know, easily build on and analyze this data. So, as we become more and more digitally transformed in each and every process, I think this will be an interesting trend to watch — maybe security will become more of a focus than what we usually talk about with observability. So, yeah, that's what I'm really excited about, and I see that as a trend.
E
Definitely — and as I have been working at that intersection of security and observability for three years, it's very interesting to see that viewpoint. Even one of the payments companies we were working with, mostly on the observability side of things, eventually started using the same data for detecting DDoS and other attack types as well — using the exact same instrumentation.
E
So it's kind of helping towards the security aspect of things, and we at Principal have also been working around some use cases specifically around security. One of the interesting use cases apart from security that I came across recently was for business metrics. One of the streaming platforms we are working with has a payment gateway; obviously, the payment gateway has different payment providers, and they want to know if some payment provider fails.
E
That becomes a very good example where you have this data — the metrics and traces which are coming in and giving you all this context — and you can use it for different business use cases as well. That's one area where I see this data being used, and where business use cases are being addressed using the observability side of things. So, Pranay, what are the interesting things you came across?
G
The domain itself has evolved a lot. One of the things I'm noticing is that observability is sort of shifting left, and more and more individual developers are really looking to understand how their systems are working. So, for example, some of our users are using SigNoz to monitor their front-end applications also — though traditionally we think of observability as focused only on the back end.
G
But people now want to see how a trace goes from the front end to the load balancer to the back end. As software systems are getting more complex, we are seeing that more and more people are interested in learning how these systems are behaving, and I think it's also important that they understand how things are going. The second thing we are recently seeing is that people are becoming a lot more privacy-sensitive, in the sense that they want to understand, hey —
G
— where is the observability data stored, and which servers does it go to? For example, one of the users of SigNoz is a fintech company based in India, and the RBI recently had a regulation that if you are a fintech company, you can't send PII data outside — outside the country, something like that. So that pushed the point that observability data is important, and we can't just store lots of it anywhere.
G
We need to have control over where this data is stored — that is one of the things we are seeing, and it's pretty interesting to me. I think this trend will only become more important as countries become more sensitive about their data. Those are a couple of the things we're seeing. More broadly, there's a whole influx of new companies who are investing more in setting up their observability.
G
For example, there's a KMS company which uses SigNoz because they have a very high throughput which they want to monitor, and that's why they go for an open source product like ours — because otherwise, SaaS vendors have become very costly — and they can control things themselves.
G
They can control how much they run and what type of servers they want to run on, and they don't have to suffer latencies, etc., by sending data to the cloud. So, broadly, we're seeing observability being used by many more types of users, and across every part of the organization — even front-end developers are now interested in learning about observability.
G
Yeah, so our view is that, because observability products are used by developers, if developers are your end users, open source just makes more sense — given that the quality of the products is comparable. Earlier, people used to think that open source products were not as good as closed source products, but now, many times, open source product quality is comparable.
G
So to us it seems that if you're a developer and you have to choose a product, by default you should choose open source — because you can see the code and you can integrate it into your applications more closely.
G
You can customize it more tightly with your CI/CD pipelines, and you can also be part of the community. So it's not just about using a particular product: if you find something and fix it in your system, you can contribute it back.
G
So fundamentally, the way we think about it is that products used by developers, DevOps engineers, and so on will see a huge jump in users of open source — people will go there by default rather than just using closed source vendors. And the landscape as a whole is maturing very rapidly, I'd say; there are lots of great products in the domain which people can now choose from.
G
I think all three of us represent products which people can try to see what fits well for them, and I would say the quality of the products available in open source is pretty great, especially in our domain. I think OpenTelemetry has done a great job of commoditizing the instrumentation layer.
G
Then there is how you can efficiently store data — that's the approach we are taking at SigNoz: OpenTelemetry is solving the instrumentation layer, but how do we add more value to the user by building a great front end and a great back end, and making that as simple as the SaaS products out there? So yeah — broadly, very bullish about the open source landscape in this domain.
E
As you mentioned, there are a lot of open source observability products which are very mature these days. Specifically with OpenTelemetry, the democratization of data collection, the simplicity it brings, and the vendor-neutral way of doing it definitely help people instrument their applications easily, and they don't have to worry that
E
they might change the vendor or product they are using one day and then no longer be able to get as much value out of the previous instrumentation.
G
Yeah, I think it's just a higher level of trust: as a developer, you want to know what's running in your infra. This is something many of our users tell us — they are developers too; they know what code is, they want to learn what's running, and they don't want to be engulfed in a black box. And that's
G
why it's very important to earn that trust: "this is a product where I can see what is inside; if the quality of the code looks good, I can use it, and I can even fix it myself if there is some issue in the code." So I think being open — being able to provide that sense of trust — is very important, and that's why we are seeing a swell of open source adoption. I think this trend will continue.
E
Definitely. Arvind, what's your take on the whole evolution of the open source observability landscape?
H
I think it's exciting. A lot of projects have come in, in a short time. OpenTelemetry — with the merging of OpenCensus and OpenTracing — is itself a great standard to follow, and I believe that if it really works out, it's going to bring a lot of change to what everyone is trying to do at this point in time. I think there are
H
SaaS observability companies, and eventually there will be a lot of enterprise adoption of all of these — that is going to be interesting from a standards point of view, especially in the open source area. And then there are these discrete, popular data stores: Prometheus, VictoriaMetrics, ClickHouse, and so on. Each of these serves as a store on which a lot of people build customized solutions.
H
On top of that, I have seen a couple of cases where people built event stores by putting these pieces together as a solution, and that itself is really great. I guess there is also one more popular project from LinkedIn — I forget the name — but I think that's interesting as well. So there
H
are many of these open source projects which are not exactly a drop-in replacement for any other database, but together these open source modules would make a great DB, or a great solution for observability. I think that's where we are heading from the open source point of view as well.
E
Absolutely. As you rightly mentioned, there are so many problems in this space — not one hard problem or one easy problem, but many. Starting with the metric store or the trace store, storage becomes one of the important problems; then analytics becomes one; then visualization becomes another important one. So there are many interesting products and people coming into the picture to solve these problems.
H
I think in recent times companies are also open sourcing some of these data stores — whatever models they are following — which is also very nice, because it's not like they're keeping a columnar store internally as proprietary. From an observability and open source standpoint, it's good to have more of this in the open and transparent, whatever the license may be. That is really nice to see, in my opinion.
H
In terms of projects, Prometheus and OpenTelemetry are good places to start — even though they are big, you will find a lot of tangents to work on. You could go and work on the Java side, or on the Rust side, where contributions are probably needed a bit more. So there
H
are things you could build in parallel if you want to take a look. Every quarter, at least, I see one or two interesting projects. For example, one that I just pointed out is the Polar Signals one, where they have open sourced their approach to storing events or data in a columnar store.
H
So that is a good one to look at as well. But I think foundation-led projects will give you greater access to everyone in the network from a career point of view, and also make it interesting for you to work with other foundation projects like Kubernetes. So some of those projects are what I would point to — and in the interest of everyone on the panel, I'll also mention that SigNoz is an open source project
H
that you might want to look at, and Hypertrace is one more. Each of them is interesting if you just want to learn or take a look at them as well.
G
I think Arvind had a good list. One project which I am following is Cilium. It is an eBPF-based project which is not specifically solving observability, but they are into monitoring and networking, and observability as well.
G
The key thing it does is enable you to get metrics from the Linux kernel very easily. So rather than instrumenting all your applications with a bit of code and then collecting the data, you can enable certain flags and start getting that data.
G
So that's one of the projects I'm keenly looking into, and we'll see how we can think about adding some parts of that to SigNoz. But yeah — very excited about the different DBs that are emerging specifically for observability. These are great projects we should be looking into and learning more about.
E
Yes — so people who want to check it out should definitely go look at the eBPF work as well; it's evolving pretty quickly and heading in the right direction, I'd say. Finally, as we come towards the end of this discussion: what would you suggest that companies who are just getting started with observability should begin with in the initial phases?
G
Yeah, maybe I can go first. One thing we recommend to all our users — to anybody who asks for advice on observability — is to look deeply into OpenTelemetry. We are based on OpenTelemetry for the instrumentation layer, and we're seeing the project evolve rapidly. And because it's an open standard, it lets you plug in any back end in the end.
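That back-end pluggability is easiest to see in an OpenTelemetry Collector configuration, where switching vendors is essentially a one-line exporter change. A minimal sketch — the endpoint URL is a placeholder, not a real service:

```yaml
# Receive OTLP from instrumented apps, batch, and ship to one back end.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  otlphttp:
    # Swap this endpoint to move between back ends (SigNoz, Elastic, ...)
    endpoint: "https://example-backend.internal:4318"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
```

The instrumentation in the application never changes; only this pipeline does.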
G
So, for example, you can start with SigNoz, but if you don't like us you can use Elastic or any other tool. So if you're starting to think about observability today, look deeply into OpenTelemetry and what it can do for you. I think 70 to 80 percent of the stack is fairly mature now, so you should be able to get a good handle on it.
G
That makes you future-proof. Rather than getting locked into a particular vendor — stuck with their instrumentation code in your applications, unable to get out because you have incurred so much development debt on top of it — start with an open standard. It may not be perfect today, but it's maturing rapidly, and you keep the flexibility of switching vendors
G
if you want, and of just being able to see the code. So that's one. The second would be to evaluate whether the open source projects that are coming up can serve your needs. As I mentioned earlier, if you're a developer and there is an open source project of the same quality — and the concern used to be that open source projects did a piecewise job —
G
but if you can get something which does what you want, I would suggest starting with that, even though it might take a bit more effort in the beginning. I'd guess it pays off in the long term, and you'd also help grow the open source project and contribute back — it can simply be much better learning for your developers. So those are the two things: start with OpenTelemetry, and check out the open source projects.
E
Absolutely. I usually reverse these ideas and start with: first understand your goals — what you want to do with the whole thing, what you want to understand about your system, and how it will serve you. Then start small: use open standards by default, instrument one service, see what insights it gives you, and eventually start tapping into other parts of the stack as well.
E
But yes — so what would be your advice for people who are starting with observability?
H
I would give exactly the kind of advice you were both giving. Pranay talked a lot about tools, the various projects out there, long-term TCO, and all of that. But I would recommend: if you are thinking about observability or monitoring, whatever it is, just start a Google Doc or a sheet of paper, and — whether you are the product owner, a product manager, or a developer — start writing down: what are the top 10 metrics I really want to know? How is my data currently being collected, and what do I want to do with it?
H
Should I throw it away and start fresh? So those are the questions: how do I collect the data? What data do I need to pick up and sanitize — what is the process? What are the top 10 metrics I want to monitor? If there is an issue, do I already have a dashboard or an alert to look at? I think the tools, databases, solutions, technologies, SaaS platforms, and open source projects
H
all come second, because once the business is in trouble and you are incurring losses, all your boss wants is to get it up and running — "I don't care what it takes, please get it done" — and at that point you will not have time to look at any of that. What people usually do instead is SSH in and try to figure things out by themselves.
H
So that is what I would recommend to any developer or product owner: figure out what you really want to do. There is also the saying about mean time to detect and mean time to react — you need to focus on making that mean time to react better and faster. SREs know this very well. So yeah, that's my advice.
E
Thanks, Arvind — that was a very good articulation of everything. And that's all the questions I had around observability, open source, and the community aspect of things. I definitely think there's great community support backing the good projects around observability these days, and there are so many amazing people working together to solve this problem and make systems observable every day.
E
So I think it's a great state to be in, and we will see how the future evolves and what different use cases and applications we will see in the coming days.
E
So thanks, Pranay and Arvind, for joining in. I'm looking forward to having this discussion on an even wider note in the coming years, and we will see how this whole evolution plays out. Thank you.
B
Wow, that was a wonderful panel discussion. I'm really glad these companies are working on this observability concept — boy, that's a mouthful of a word for me. It reminds me of the good old developer days when we had to struggle through crisis management: we had tons and tons of log files from disparate systems, and we had to tie them back together to make sense of it all. Oh my god — it sounds almost too good for me to believe.
B
Maybe I need to reset something, but it's too good to believe — and if this thing works, it will actually make life easier for developers and anyone who is part of the SDLC. It looks like even businesses can make sense of the usage; that's very nice. I was trying to research more on it in parallel, because there was no demo in this panel
B
discussion. It looks like signoz.io — the website of one of the panel members — has a quick demo of how things work, so maybe people can go check it out. And one more thing — I don't know if you noticed, Richa — a nice quote: increased trust with open source.
B
That's amazing, right? If someone had told me that a decade ago, I don't know whether I'd have laughed or just wondered. But things have definitely changed when it comes to open source: now people think that if it's open, it is trustworthy. Amazing. That again reminds me of the statement you quoted earlier: open source has won. On a lighter note —
B
I was wondering about Arvind: he's all locked up, all bolted in, and I was wondering who he's hiding from. Maybe I'll have a private chat with him — I don't know if it's the pets or the kids. Thankfully, I am nicely hiding in an office space.
C
So yeah, coming back to our sessions: in the next session we are going to hear from Kanika Pasrija, who is a Program Manager at Microsoft. She will be covering the OIDC protocol, its relevance to security, and how we can leverage it to access Azure resources from GitHub Actions. And don't worry, even if you're not very well acquainted with GitHub Actions or with authentication and authorization protocols.
A
Hi everyone, I hope you're all doing well and having a wonderful time at GitHub Constellation. Let me take a few seconds to introduce myself first. My name is Kanika Pasrija, I'm working as a Program Manager at Microsoft, and I'm really happy to say that I'm an ex-Hubber. It has been a proud, happy, and wonderful journey being part of both of these wonderful organizations, GitHub and Microsoft.
A
Now, when we talk about Microsoft, we think about Azure — and yes, you guessed it right: this session is about accessing Azure resources using OIDC in GitHub Actions. No need to worry — I know there are a lot of terms in the title (Azure, OIDC, GitHub Actions), but there is no hard prerequisite for understanding this session. We'll go through them step by step, and rest assured, you will take away some good knowledge from it.
A
So let's talk about the first term in the topic: access. What is access? Whenever you're trying to access anything on the internet — maybe your Instagram feed, your Facebook photos, anything — there are a lot of things involved: security concerns, validations, and a lot of protocols coming into the picture. So let's try to understand just two basic terms, which often lead to some confusion.
A
We'll start the session by understanding some basic terms — and yes, as you can see in the title, OIDC and GitHub Actions — and then we will actually walk through one wonderful scenario of accessing Azure resources. Coming back to the terms: those two terms are authentication and authorization.
A
What do these terms actually mean? They cannot be used interchangeably, obviously. Authentication basically answers the question "who are you?" — whenever you're trying to log into a system, it checks whether you are the right user to access those things. But even if you're the right user, do you have permission to do the activity you're trying to do? That is what we call authorization.
A
So "who are you?" is answered by authentication, and "are you allowed to do that?" is answered by authorization. Let's take a very common example. I hope you like to travel; if you do, you might have visited the airport multiple times. What happens when you visit an airport? There are certain security checks, right? It's not just the internet that requires security checks — a lot of other things in life do too.
A
When you're entering the airport, there are certain checks. One of them is when they look at your ID to see who you are — and not just that, they actually match it with your face. Sadly, nowadays we have to take our masks off to show our real faces, but yes, that's the process. They check whether you are the right person entering the airport — the person the flight was actually booked for
A
and everything else. That is called authentication: ensuring who you are and that you are the right person boarding the flight. Once that is done, you are issued a boarding pass. After your luggage gets checked in, when you're boarding the flight, they don't check your face again, right? But what if you board the Mumbai flight instead of the Delhi flight? Then there's a twist in the story.
A
That is why we need to ensure you have the rightful permission to board that flight. That is what we call authorization: when you show your boarding pass, they see you are entering the right flight — you have the permission to do that. I hope that the next time you're at the airport, this authentication-and-authorization picture sticks with you — and not just for flights, but for understanding all the security protocols of internet access.
A
That covers the basic terms we'll use today. Now, the title had a couple more terms: GitHub Actions and OIDC. Let's go through them one by one. Some of you might have used GitHub Actions before; for some it might be new — maybe heard of but never used. So, just to remove that blocker for understanding this topic, I'll give a very brief overview.
A
Let's take a very common example: you push your code into the repository, and now you want certain unit tests or integration tests to run. The event is whenever you push that code, and then an action — a script — can run on different runners. By "runner" I mean different environments.
A
You might want to run those tests on a Windows OS or a Linux OS, for example. So running an action basically means calling a function to do something — and just as with the basic fundamentals of functions, we pass certain arguments. Here too, an action has a defined signature, whatever parameters it requires; those are passed with the action call, and the steps are executed.
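The push-triggered test run described above looks roughly like this as a workflow file — the file path, Node version, and test command are illustrative placeholders:

```yaml
# .github/workflows/tests.yml — run tests on every push
name: tests
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest           # the "runner": a hosted Linux environment
    steps:
      - uses: actions/checkout@v3    # an action call, like calling a function
      - uses: actions/setup-node@v3  # "with" passes the action's parameters
        with:
          node-version: 18
      - run: npm test                # placeholder test command
```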
A
So that is what GitHub Actions are. Next, as I mentioned, we'll understand what OIDC is — I'm covering these two or three terms up front so that the rest of the session becomes easy for you. So, what is OIDC? Now that you're very much aware of what authorization and authentication mean: OAuth 2.0 is a protocol which lets you implement the authorization part while accessing anything over the internet.
A
But as we discussed, authorization is good but not good enough. So we have a thin layer called OIDC — OpenID Connect — which sits on top of the OAuth 2.0 protocol and enables us to implement authentication along with authorization. I'm not going into much detail on the security policies and their implementations, but don't worry — at the end I'll share good resources which you can go through to understand these concepts in detail.
A
Let's take a basic example. Before going to the flow diagram, let me tell you what the different components are. The resource owner is you: you own different things on the internet — your photos, your videos, your songs, and not just that, your contacts, your friend list, your messages. All of these are resources which belong to you. Then there's a client application — say any application, as we talked about. Let's take an example: a "my photo" app.
A
This app, say, takes your photos and creates a wonderful video for you. My friend's birthday is coming, and I want to put up a social media post with a video — but I have all my photos on Facebook. What will I do? The client application, my photo app, needs access to Facebook, but I'm not comfortable giving my Facebook credentials to the photo app, right? So I'll use "Log in via Facebook", and Facebook will act as my authorization server.
A
Once you click "Log in via Facebook" and enter your Facebook credentials, an exchange of tokens happens. For our focus here, two tokens are exchanged: the ID token and the access token. As we discussed, OIDC is about two things — authorization and authentication — and the ID token and access token serve those two needs. The ID token ensures authentication:
A
that the right user is accessing those resources. Authorization is for the permission part. I just want my photo app to take my photos and create a video out of them, but I don't want it to post on my behalf or send friend requests to someone — no, that's not the permission I want to give. That side is handled by the authorization part, which is controlled by the exchange of the access token.
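Both tokens are typically JWTs: base64url-encoded JSON segments you can inspect. A small illustrative sketch — the token below is fabricated for the example, and real code must also verify the signature against the provider's published keys, which this deliberately does not do:

```python
import base64
import json
import time

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT.

    This only inspects the payload; production code must verify the
    signature against the issuer's published signing keys.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# A made-up payload shaped like the claims an OIDC ID token carries.
claims = {
    "iss": "https://token.actions.githubusercontent.com",  # who issued it
    "sub": "repo:octo-org/octo-repo:ref:refs/heads/main",  # who it is about
    "aud": "api://AzureADTokenExchange",                   # who it is for
    "exp": int(time.time()) + 300,                         # short-lived
}
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{body}.fake-signature"

decoded = decode_jwt_claims(token)
print(decoded["sub"])
```

The `iss`, `sub`, `aud`, and `exp` claims are exactly what the relying party checks before granting access.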
A
These two tokens are exchanged, and we can safely access resources over the internet. I hope this gives you a glimpse of what the OIDC flow is, what the security concerns are, and how things are handled over the internet. The next time you click "Log in via Facebook", you'll know what is happening behind the scenes. I'm pretty sure you've seen that screen where it asks you: "this application is asking permission to access your photos — do you agree or not?"
A
So that was about understanding the OIDC flow, and I hope a picture has started to build in your head. As a recap so far: we got to know what authentication and authorization are — the airport story; next, we understood GitHub Actions — when a developer pushes something, certain tests have to run; and now this "Log in via Facebook" flow, which you do regularly in day-to-day life. Next, let's try to understand the scenario for accessing Azure resources — why would GitHub need that?
A
Let me introduce you to a friend of mine named Coco. Hi, Coco! Coco says "hello, world". Coco is a developer, and Coco's team has a web application which is deployed to Azure, and that application can be accessed via some credentials. I am sure you might also have some Azure applications like this.
A
Now, Coco's team keeps working on that application, and they want it to be up and running all the time, so they have built some test scenarios and want to set up a GitHub workflow which ensures that all the unit tests and integration tests run properly. If you remember, this is the GitHub Actions workflow — I took the same example: when you push code, you want tests to run and ensure things are done properly. So, can Coco do it? How? Let's go through it.
A
Coco has a GitHub workflow. As I mentioned, a certain event is triggered, and on that basis certain actions are expected to run. Now, since Azure is involved, there are some access restrictions, right? GitHub is one workflow platform, and my services are on Azure, which has different security constraints.
A
If you're an Azure user, you might know of something called an app registration, or an SPN, which provides us with very important credentials: the client ID and the client secret.
A
The login action is one of the GitHub actions provided to us, through which the workflow can access Azure resources. Going by the terms we used when understanding OIDC: there's a client, which is the GitHub workflow, and there's the authorization and resource server, which is Azure — because Azure will authenticate and Azure will provide the right resources. So Coco can use the login action and pass these credentials: client ID, client secret, tenant ID, and subscription ID.
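With this secret-based approach, the login step looks roughly like the sketch below. The service principal's JSON credentials — including the client secret — live in an Actions secret, here assumed to be named `AZURE_CREDENTIALS`:

```yaml
steps:
  - uses: azure/login@v1
    with:
      # JSON blob holding clientId, clientSecret, tenantId, subscriptionId,
      # duplicated from Azure into the GitHub Actions secret store
      creds: ${{ secrets.AZURE_CREDENTIALS }}
  - run: az webapp list --output table   # subsequent steps are authorized
```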
A
I hope you know how to create those using an SPN — or you can go to the Azure portal and create those credentials. Not to worry even if you haven't done that before; I'll definitely share the resources, a step-by-step guide to do that, too.
A
Once the login action passes these credentials — which act like a username and password, just as in any application — access is granted, and the GitHub workflow can then access the services and resources, run the test strategy, or implement anything the team wants. So, for our scenario, the process works.
A
The process is complete, but there are certain concerns. If you remember, four things are passed: client ID, client secret, tenant ID, and subscription ID. The tenant ID and subscription ID belong to your Azure account, but the client ID and client secret are the actual credentials needed for access.
A
In this flow, we use the GitHub Actions secret store to save these secrets — which means those Azure credentials are saved in the GitHub Actions store. That is a duplication of secrets, and if they're saved as long-lived secrets, I'm sure I'm not the first person to tell you it's not a good habit. There are a lot of security concerns once a secret gets leaked.
A
It can lead to some nightmares. So: cloud secrets are stored as long-lived secrets in the Actions store, which is not a good thing, and there's a duplication of Azure credentials in the GitHub Actions store. Consider the scenario: it's not just one workflow and one repository — you might have different repositories on GitHub. What will you do? Will you create different Azure SPNs and keep saving those secrets?
A
Will you keep coming back to the Actions store and keep updating them? That doesn't look good to me, to you, or even to Coco. So although Coco was able to access the resources, Coco isn't really happy and satisfied. But wait a second — we also understood the concept called OIDC, right? That might have a role to play. And yes, you guessed it right: we now have OIDC support, provided to us by GitHub, which makes this process more secure.
A
How does this happen? Some trust has to be established: if I'm not passing a client secret, which acts as my ID and password, there has to be some other way to establish that trust — and that is done by the GitHub OIDC provider we now have. With this OIDC support, the new flow looks like this: we have the GitHub workflow, and there is an OIDC provider.
A
We make a call to the OIDC provider and say, "I want an ID token." The OIDC provider checks where this request is coming from — what the repository is, who is making the call, which branch, and all the other details it can get from the client — and on that basis it returns an ID token as a well-structured JSON, a JWT. That ID token has details about the client who requested it, and not just that —
A
it also has an expiry time. As we discussed, long-lived secrets are not good; there has to be an expiry to ensure things are accessed and refreshed properly. Now this ID token is used instead of the client secret which we were using before, without the OIDC support. As you can see, the other parameters are the same — client ID, tenant ID, and subscription ID — but what changed is the client secret part, which was being saved in the GitHub Actions store. I need not do that anymore.
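With OIDC, the job asks GitHub for an ID token (the `id-token: write` permission) and the login action exchanges it for Azure access — no client secret is stored anywhere. A sketch, with placeholder secret names for the IDs (which are identifiers, not secret material):

```yaml
permissions:
  id-token: write   # lets the job request an OIDC ID token from GitHub
  contents: read

steps:
  - uses: azure/login@v1
    with:
      client-id: ${{ secrets.AZURE_CLIENT_ID }}
      tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # note: no client-secret — the short-lived ID token replaces it
```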
A
I have the ID token now. So what happens when I pass this ID token to Azure? Azure validates it. How? It checks whether it has a pre-established trust with any OIDC provider — and, as I said, GitHub's OIDC provider has a trust relationship configured with Azure, so Azure can check: "okay, this token was issued by you, for these credentials, for these workflows — so I'll grant the requested access."
A
There are a lot more processes and checks which Azure does, but at a high level this is what the validation process looks like. Instead of the client secret, I have the ID token; Coco passes that, Azure validates it based on certain criteria, and access is granted. So now Coco is not saving the secrets anywhere in the Actions store and not duplicating them — and even if an issued token expires,
A
Coco can easily go and request another ID token, because if the GitHub workflow is a valid client as per the pre-established trust between Azure and the OIDC provider, then it will get the required access — Coco need not worry. I will also tell you how to set up this trust factor.
A
This worked for Coco, and I'm pretty sure it will work for you as well: securing Azure resources using OIDC support in GitHub Actions. Now you might be wondering — okay, this Coco is happy, but the Coco in me also needs some happiness; how do I get started? This is all well and good, but what are the things I can actually do, and how can all of this be set up?
A
For this presentation I have scoped it down a bit: I am assuming that you have a certain service deployed to Azure, and even if you're not very comfortable writing a test GitHub workflow right now, with the resources I will share you can write those steps properly — all the actions, all the functions, executing all the scripts. But the key thing here is using the OIDC support, for which you actually need to do a certain setup.
A
That setup will establish the trust factor, so you need not provide the client secret. It's very simple; I'll just share the steps with you. As you can see on the screen, this is a screenshot from the Azure portal. As I mentioned, using an SPN — using apps to access Active Directory resources — this is a build-up on that: you create an application in Azure and register that application as an Active Directory application.
A
In the menu bar there's an option called Certificates & secrets. You open that, and here you can set certain secrets and certificates: client secrets and federated credentials. These hold the configuration according to which this application will let you, or any client application, access the Azure resources.
A
This is how it looks: once you click on that, you will see this issuer — this is that OIDC provider we talked about, so the trust is established between that OIDC provider and Azure. But now a request comes in, right? How will Azure check whether it's a valid request or not? The request may be coming from the right OIDC provider but, for example, from another repository to which I never intended to grant permissions.
A
So that's why the repository, the organization, and the entity type — which can be a particular environment or a particular branch — come into the picture: I want to restrict my access to those things. And, as I said, the pre-established trust factor here is between OIDC and Azure. On the basis of this, whenever I make a call to the OIDC provider and an ID token is returned to me, that ID token is passed to Azure; Azure validates it and matches these conditions — whether the workflow or the client app that is requesting
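The restriction described above — accept tokens only from a trusted issuer and only for the exact repository, branch, or environment configured on the federated credential — can be sketched as a simple match. The field names below are simplified for illustration, but the `repo:<org>/<repo>:ref:<ref>` subject format and the `https://token.actions.githubusercontent.com` issuer are the ones GitHub's OIDC provider actually uses:

```python
def subject_for(repo: str, ref: str) -> str:
    """Build the subject claim GitHub's OIDC provider emits for a
    branch-scoped workflow run, e.g. 'repo:org/repo:ref:refs/heads/main'."""
    return f"repo:{repo}:ref:{ref}"

def is_trusted(token_claims: dict, federated_credential: dict) -> bool:
    """Sketch of the cloud-side check: the token must come from the
    expected issuer AND its subject must exactly match the subject
    configured on the federated credential."""
    return (
        token_claims.get("iss") == federated_credential["issuer"]
        and token_claims.get("sub") == federated_credential["subject"]
    )

cred = {
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": subject_for("octo-org/octo-repo", "refs/heads/main"),
}
good = {"iss": "https://token.actions.githubusercontent.com",
        "sub": "repo:octo-org/octo-repo:ref:refs/heads/main"}
# Same trusted issuer, but a repository we never configured: rejected.
bad = {"iss": "https://token.actions.githubusercontent.com",
       "sub": "repo:attacker/other-repo:ref:refs/heads/main"}
```

This is only the shape of the check — Azure performs more validation (signature, audience, expiry) than shown here.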
A
the access is the right one, and whether it has the right permissions — going back to the story of authentication and authorization. Yes, those things are checked, and then, definitely, the access can be granted. So this pretty much covers how we access Azure resources using GitHub's OIDC support via GitHub Actions. I hope it gave you some clarity and some pointers on how to get started, what the benefits are, clarified certain terms and meanings, and why we are doing all this.
A
I hope that part is pretty clear. Being a PM myself, I'm very much focused on the "why" part — why am I even giving credentials, why am I doing this step at all — and for you also, I feel it's really great to know why we are doing things, so that we are really inspired to do them.
A
That's it from my side, and as I promised during the session, I will give you proper resources which you can go through — so that, just like the Coco here, your Coco can be happy: the Coco in you keeps smiling and is secured with all the proper security practices in place. So these are the resources: you can understand more about the Microsoft identity platform and OIDC, and learn more about GitHub Actions. This was the Azure Login action which I talked about today. You can involve a lot of other things too.
A
For example, you can directly deploy a web app to Azure using a GitHub Action — there are a lot of things available for you in the marketplace. There is Azure app registration, from which the trust-factor establishment I showed comes, then the Azure Login action, and there are different security protocols which you can read more about. I hope you enjoyed it and had fun.
A
Coco is happy, and I hope you are happy as well. Feel free to share your feedback with me — I'll be happy to connect with you through the URL I have provided. I hope you had lots of fun and lots of good information was shared. All the best: stay secure, stay safe on the internet. Thank you so much, bye-bye.
C
Kanika talked about the specific GitHub and Azure scenario, and also the new GitHub OIDC provider that makes it possible. This eliminates the need to have long-lived secrets, and also alleviates credential life-cycle management and other concerns such as secret leakage and duplication. Secure and easy — developers, are you listening?
B
Not sure about others, Richa, but the developer in me is definitely listening. That's a good segue to another aspect of the SDLC: whenever you're developing any good application or product, how do you ensure the reliability of that product? Of course, the answer is testing. But when your product or application matures over time and becomes more complex, how do you scale up testing? It's always a challenge, and I think that's what Mayank Bhola is going to teach us — he is a co-founder of LambdaTest.
I
Hey
everyone,
it's
great
to
be
talking
at
github,
constellation
india,
and
I
hope
you
guys
are
having
a
great
time.
I
am
bang
bhula.
I
am
co-founder
in
india
products
at
lambda
test,
and
today
I
am
going
to
talk
about
a
topic
which
is
very
close
to
our
hearts.
It's
about
shift
lift
in
this
talk.
We
are
going
to
discuss
about
the
current
challenges
that
organizations
face
while
moving
to
shift
left
some
of
the
solutions
that
are
out
in
the
market
and
what
are
the
ingenious
space
on
which
research
is
currently
happening.
I
From the outset, it looks like we only test when we are about to put out or design something. It actually means that the testing should start from the planning phase — and this is where test-driven development and the emergence of Cucumber had all started. So if you look at this diagram, it's very clear that, starting from planning until we deploy our code to production, we have to test at every single phase. That is exactly what shift-left testing means.
I
And there is a lot of research from the last two decades that clearly shows that a bug caught in the development phase costs much less than one found in production — and there are lots of memes out there on the internet about this.
I
The numbers might not be realistic, but the difference between them is representative of the scale of what companies and organizations can save if they can fix bugs right in the development phase — and all of that creates the premise of why shift-left testing actually matters.
I
The startup culture, the agile methodologies, the fierce competition in the market, and the urge to push out several times a day led to this movement and accelerated it in recent times. When there are multiple competitors out there pushing code faster than you, it becomes the responsibility of an organization to keep pace with the competition they are facing. That ultimately leads to shipping more frequently, which ultimately boils down to testing even more frequently.
I
Now let's talk about the reaction it has caused. We saw the good side, but let's also have a look at what bad side shift-left testing actually means for organizations. First of all, we have opened the floodgates: we are saying that we need to write more and more test cases, but ultimately that adds pressure on the code repository, since you opened the floodgates.
I
And finally, if so many test cases are written, it's not actually possible to run them all — you'll see the reason on the next slide — but what it actually means is that organizations are forced to run the test cases on batches of commits. For a larger organization that is committing 100 times in an hour, it's not possible to run all the test cases all the time.
I
For example, we start seeing high test-execution times, which leads to clogged CI pipelines. For an organization that commits 100 times in an hour, it's not possible to spend multiple minutes on every commit; since more and more test cases are being added, there will come a time when it is not possible to run CI for every commit, and that can lead to multiple problems. One: velocity leakage.
I
If a single CI run is blocking your resources, you will be limited in the number of parallel CI runs, and there will be a huge queue of developers waiting on each other just to run their jobs. That's a huge risk for an organization: if they leak velocity, the entire momentum is going to suffer.
I
Secondly, since we do not have control over flaky test cases, there's a lack of quality control, and ultimately it leads to a lack of confidence in the testing itself. What organizations start seeing is that developers try to find hacky ways to skip the test cases, because they have hard deadlines — they need to push out code very frequently, and the CI is not helping them out.
I
They'll try to dive into other directions in which they can skip the test cases, which shatters the whole methodology of testing itself. Ultimately, when all of this happens, it's very hard to measure the ROI of all the expenditure going into the testing infra — the resources, the manpower, the strategy — and it poses a question: why are we even testing?
I
So now, when the rest of the world has started shifting left, they are not without solutions. There are some small pieces with which they can try to solve this problem in miniature ways, and what organizations currently do is stitch these smaller solutions together to make them work for them.
I
Essentially, in a nutshell, it means there is no standard solution out there; there are bits and pieces which have to be converted into an add-on solution for every organization. The bad part is that every time an organization reaches a scale where they have to think about solutions for these problems, they have to reinvent the wheel.
I
One of the most apparent solutions here is this: why are we running all the test cases? If we have an influx of a large number of test cases, does it make sense to run all of them every time? Obviously the answer is no — but how do we do that? Let's start discussing what people have been doing since the beginning.
I
The earliest example of selective testing was done using tags. Traditionally, the QA teams and the developer teams labeled their test cases and only ran a particular tag at a particular stage of the SDLC — for example, they used to run smoke tests on the staging servers and regression tests on the pre-prod servers.
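The tag-based selection described above is easy to sketch. The test names, tags, and stage mapping below are invented for illustration — the idea is simply that each pipeline stage filters the suite down to its own label:

```python
# Each test carries a set of tags; a pipeline stage runs only its tag.
TESTS = {
    "test_login_smoke":    {"smoke"},
    "test_checkout_smoke": {"smoke"},
    "test_refund_flow":    {"regression"},
    "test_invoice_totals": {"regression", "smoke"},
}

def select_by_tag(tests: dict, tag: str) -> list:
    """Return the tests labeled with the given tag, sorted so the
    selection order is deterministic."""
    return sorted(name for name, tags in tests.items() if tag in tags)

staging_suite = select_by_tag(TESTS, "smoke")       # run on staging
preprod_suite = select_by_tag(TESTS, "regression")  # run on pre-prod
```

As the talk notes, this is crude: the selection reflects how humans labeled the tests, not what the commit actually touched.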
I
It was a very introductory phase of this subset selection of test cases, but it was definitely not all that effective — until monorepos came into the picture.
I
Monorepo tools are not targeted at solving this problem in particular, but they did provide some directives with which we could say: if only one of the sub-packages is impacted, we do not need to run the test cases of all the other packages. In terms of accuracy, though, that alone is incorrect, because sub-packages may have interdependencies among them. So if we just run the test cases of the impacted package, we are at risk of not running essential test cases.
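The interdependency risk just described can be handled by selecting not only the changed package but also everything that transitively depends on it. Here is a small sketch over a hypothetical monorepo — the package names and dependency edges are made up:

```python
# Hypothetical monorepo: each package's direct dependencies.
DEPENDS_ON = {
    "checkout": {"payments", "cart"},
    "cart":     {"catalog"},
    "payments": set(),
    "catalog":  set(),
}

def impacted_packages(changed: str, depends_on: dict) -> set:
    """Return the changed package plus every package that transitively
    depends on it — all of these need their test suites run."""
    impacted = {changed}
    grew = True
    while grew:  # keep expanding until a full pass adds nothing new
        grew = False
        for pkg, deps in depends_on.items():
            if pkg not in impacted and deps & impacted:
                impacted.add(pkg)
                grew = True
    return impacted
```

So a change to `catalog` pulls in `cart` (which imports it) and `checkout` (which imports `cart`), while `payments` stays untested — which is exactly the kind of selection a package-only rule would get wrong.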
I
Facebook, now called Meta, is using an AI trained on every single line of commit going into their system, and the system is able to pinpoint the exact subset of test cases that need to run.
I
Obviously this works at scale for them, but this solution is not a good fit for a small startup or an organization that doesn't have that many resources in terms of machine learning and AI. And secondly, since every organization's code is very particular to them, there is no possibility of a generic AI solution that fits over every code base — so if an organization goes this way, they have to work harder.
I
But let's see in a little more detail how test impact analysis can actually work for you. You can parse the source code and see which test cases depend on which modules — this is quite easy to do from a static point of view.
I
So if we can somehow take this graph from the module resolvers and attach it to the test-running system, we get a way to decide which test cases to run — and this is exactly what Jest does. Jest is a very famous open-source testing tool for JavaScript, and it uses exactly this kind of dependency mapping to decide which tests got impacted. Unfortunately it doesn't work on CI systems, but it's a good step in this direction.
I
Currently, if you look at the status quo of TIA: it started with Google's tester, a very old tool that Google itself released on top of JUnit, which selectively ran only those test cases which were impacted or had been failing previously. The second attempt was made by ThoughtWorks — it was called ProTest — and it orchestrated the test cases in a load-balanced manner and also ran only those test cases which had failed last time.
I
It was followed by JTestMe, again released by ThoughtWorks, piggybacking on the Java testing tooling; to a certain extent it provided a way to run only those test cases which were impacted by the previous commit.
I
And among the latest tools, Azure Pipelines TIA is already available in the market. Although it's a commercial tool, it does provide these advanced functionalities where the end developers don't even have to do anything extra to enable it: the platform itself is able to build the dependency graphs and selectively test the impacted test cases in real time, which is a great improvement — and then there is Jest, as discussed.
I
Now let's talk about the second pressing topic: flakiness. Flakiness is everywhere — it's not just about shift left, but it becomes even more important now that more and more test cases are entering the system. Let's see how people have been combating flakiness. The most trivial and most archaic form of flaky-test detection and management was that whenever a test case failed, the team used to manually re-run only those test cases.
I
Then it became a little more evolved: the platforms and frameworks were able to correlate test-logic changes with test results, and if there was no change in the test logic or the code and still the test case was giving different outcomes, the platform or framework would mark it as flaky. That was just the beginning of this empowerment of the end user.
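That correlation rule — unchanged code plus differing outcomes equals flaky — reduces to a tiny classifier. This is a sketch of the rule as stated above, with outcome strings invented for illustration:

```python
def classify(run_outcomes: list) -> str:
    """Classify a test from its recorded outcomes across runs where
    neither the test logic nor the code under test changed.

    Mixed outcomes => flaky; uniform outcomes => passing or failing.
    """
    outcomes = set(run_outcomes)
    if len(outcomes) > 1:
        return "flaky"
    return "passing" if outcomes == {"pass"} else "failing"
```

The key precondition is the "nothing changed" part: without correlating against code and test-logic changes, a legitimate fix would be misread as flakiness.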
I
The end user is now more aware that yes, some test cases are flaky, without this having to surface in production first. Before these tools it was very difficult for somebody to figure out whether a test was flaky or not, but with these technologies it's actually possible to get visibility that certain test cases are flaky.
I
A slightly more advanced approach over this is jumbling the order of the test cases, because many times developers simply assume that the tests are going to run in order — and when they don't, the tests start showing flakiness.
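Reordering exposes exactly this class of hidden coupling. Here is a minimal, made-up example of two tests sharing mutable state: they pass in their written order but fail when run the other way around, which is what a shuffling runner is designed to surface:

```python
def run_suite(order, state):
    """Run tests in the given order against shared mutable state;
    return the set of tests that failed under this ordering."""
    failed = set()
    for name, test in order:
        if not test(state):
            failed.add(name)
    return failed

# 'writer' mutates shared state; 'reader' silently assumes it ran
# after 'writer' — a classic hidden order dependency.
def writer(state):
    state["token"] = "abc"
    return True

def reader(state):
    return state.get("token") == "abc"

in_order = run_suite([("writer", writer), ("reader", reader)], {})
shuffled = run_suite([("reader", reader), ("writer", writer)], {})
```

A real shuffler would try many random permutations; two opposite orders are enough to show the failure mode.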
I
We can also exponentially degrade the resources: for example, if our test case is running fine on one gig of memory, we could try it again at 750 MB, then 500 MB, and keep going down until we find the stage at which the test starts failing or becoming flaky. And the most state-of-the-art way to surface flaky tests is mutation testing, which takes an interesting turn: it dynamically changes the underlying code under test to figure it out.
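The resource-degradation idea can be sketched as a step-down loop. The runner below is a stand-in stub — a real implementation would execute the test in a memory-limited sandbox — so treat this purely as an illustration of the search strategy:

```python
def minimum_viable_memory(run_test, start_mb=1024, floor_mb=128):
    """Halve the memory budget until the test stops passing; the last
    passing budget hints how close the test runs to its limits.

    `run_test(budget_mb) -> bool` is a hypothetical sandboxed runner.
    """
    budget = start_mb
    last_ok = None
    while budget >= floor_mb:
        if run_test(budget):
            last_ok = budget
            budget //= 2  # exponential degradation, as described above
        else:
            break
    return last_ok

# Fake runner for the demo: this test needs at least 300 MB to pass.
needs_300mb = lambda mb: mb >= 300
```

A test whose minimum viable budget sits just under its normal allocation is a prime candidate for becoming flaky under resource pressure.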
I
And finally, with all these test cases coming into the system and the flakiness they bring, it becomes imperative that we have a huge amount of visibility and observability. Because there are so many test cases, it's not possible to view them in a single report or at a single glance — and this gives rise to the new discipline called observability. Multiple tools out in the market have started offering this kind of plugin and feature, where you can actually see the entire history of a test.
I
This is an example of how SoundCloud does the analytics: they have been using Grafana backed by some data sources. We can see the amount of effort it takes, but it's actually worth it, because now it's very easily visible which test cases are becoming flaky, which have the most bottlenecks, and how many times a particular test case has run. So, for any organization running testing at scale and aggressively adding test cases,
I
it's very important to store every execution of a test case for the entire history, and that is where the patterns start to show up. So, either manually or via some third party, it's recommended to try out these tools and store the results to see these important patterns.
I
And finally, this is an example of how Spotify stores test results and creates their ad-hoc visualizations, so that it's very easy for them to figure out which test case is flaky. This is, in my opinion, a very interesting way to get a bird's-eye view across your testing.
I
This is our take on the problem, and we have open-sourced the solution on GitHub. Currently the platform supports all the major JavaScript frameworks, like Jest, Mocha, and Jasmine, and we are going to release integrations for other languages like Java, Python, and Golang very soon. Essentially, TestAtScale solves all three previously mentioned problems, starting with selectively running a subset of test cases.
I
Second is combating flaky tests, using all the strategies discussed, as well as providing an open framework in which an organization can construct their own kind of testing policy for their flakiness combat. And finally the observability, which solves the needle-in-the-haystack problems and tells you which test case is going to become flaky, using trend analysis and anomaly detection. I'll try to show some examples of what TestAtScale provides in terms of analytics.
I
First of all, this view — definitely inspired by what Spotify is doing, and we feel this kind of interface will be helpful to multiple organizations out there — makes it very easy to spot any platform-level issues, degradations, and anomalies, and to actually get the ROI: how many times did we run the test case, and what has been the result? If you notice the green line above, it means that a developer has explicitly skipped a test case, which is a very concerning point.
I
And finally, we also provide analytics on system consumption — for example memory and CPU. This, in our opinion, is a very important metric, because if there is some anomaly in these values, we can detect that a test case may become flaky later on. This is very helpful and time-saving.
I
This is another interesting view, which shows the ratio of impacted test cases. We can see that the impacted test cases constitute a very small percentage of the entire test suite, and when this happens, the value of running only a subset of the test cases is quite apparent.
I
And that was it. I hope I brought home the point of why shift-left testing is important, the current problems the industry is facing, some of the common solutions, and our take on the test-at-scale problem. We definitely hope that the TestAtScale GitHub repository is going to help us and other organizations in many more ways, and help us embrace testing even more fiercely.
B
Wow, that was an insightful session. I never thought testing could become this complex — the number of strategies one can induct into testing, wow, that's amazing. You can bring in AI, you can think about testing the test cases, and you can also think about choosing the right set of test cases — all for one thing: the need for speed. There's an increased need for velocity, so we need to adopt these strategies. Thank you, Mayank, for educating us. Richa, what do you think?
C
I totally agree. Being an engineer, I can relate to this: testing is such a critical part of the journey, and often something that we should be more intentional about — whether it's about smarter testing or more reliable testing. "Testing the tests" was a good one to take away. Like I said, it's a very, very important part of the DevOps journey, and we're going to hear a little bit more about things that developers care about in that journey.
C
Yeah, looks like we have someone who's been looking at some of our other tracks. We have a lot of stuff going on on the security side. Really, one of the pillars at GitHub is trust by design, and I hope you all have been looking at some of the sessions on the security front — they will tell you more about how GitHub is empowering developers to write secure code.
B
Chamod, what — the DevOps track just started! Nice, thanks for showing that energy.
C
Right, let's see what's up next. People are always curious about how GitHub does its own DevOps — developers are always looking for ways to make their pipelines more efficient and more reliable.
C
So in the next session we are going to have Trilok, who is a director of engineering at GitHub, and who is going to tell us more about the mechanisms GitHub uses to increase deployment frequency and to reduce lead times for pushing changes to production. You'll hear about approaches for feature rollout, improved CI/CD, and also faster development using Codespaces.
F
Hi, welcome to GitHub Constellation India day 2. I'm super excited to present this session on how GitHub does DevOps — let's get into it. Hi everyone, welcome to this session. DevOps is the union of people, process, and technology to continuously provide value to customers.
F
DevOps enables roles like development, IT operations, and quality to coordinate and collaborate to produce better, more reliable products.
F
This is the agenda we will cover today. We will start with the common metrics typically tracked to achieve good DevOps. We will then understand how these metrics are met by discussing the mechanisms of feature rollout and continuous integration and deployment, and finally faster development using Codespaces.
F
My name is Trilok, and I am a director of engineering leading teams in GitHub Actions across the APAC region. I love developing applications to help make our lives and society a better place, and I seek new problems and challenges to solve on a regular basis. I work in GitHub Actions, helping customers realize our vision that GitHub Actions is used by every developer to automate their entire software delivery workflow. I love adventures and venture into nature often with hiking and trekking activities; I love animals, and I have several pets at home.
F
The first metric is: how frequently is the team deploying? Some of the questions we can ask are: are there any blockers, and do we have too many manual processes before the team deploys to production? And lead time is measured as the time to get a change to successfully run in production — this touches on CI/CD best practices and automation.
F
The last two metrics are focused on achieving a healthy and available system to gain customer trust. Incidents do happen, but the important thing is how fast we can close on the time to detect and the time to mitigate, in order to recover from these incidents and get the system back to a good state. As we deploy frequently, DevOps practices make sure we have the right mechanisms to roll back and to proactively reduce the blast radius of the changes being pushed.
F
Feature flags are part of safe deployment practices. We use feature flags a lot, and we use them to reduce the risk of deploying to production. Any potentially risky change is put behind a feature flag in the code, and when a deployment is done, we enable the feature flag for one or two percent of actors to minimize the impact of changes.
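A percentage-of-actors rollout like the one described is usually implemented with stable hashing, so that a given user consistently lands in or out of the rollout for a given flag. This sketch is not GitHub's implementation — the flag name and bucketing scheme are assumptions for illustration:

```python
import hashlib

def enabled_for_actor(flag: str, actor: str, percentage: float) -> bool:
    """Sticky percentage rollout: hash flag+actor into [0, 100) so the
    same actor always gets the same answer for a given flag."""
    digest = hashlib.sha256(f"{flag}:{actor}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0
    return bucket < percentage

# Roughly 2% of a hypothetical actor population lands in the rollout.
rollout = [a for a in (f"user-{i}" for i in range(1000))
           if enabled_for_actor("new-merge-queue", a, 2.0)]
```

Hashing on flag plus actor (rather than actor alone) also keeps different flags' rollouts from always hitting the same unlucky users.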
F
If something goes wrong, we can always disable the feature flag completely in a matter of seconds, without interrupting other deployments. If there were no feature flags, you would probably have to revert the PR or roll back the deployment — we of course have those mechanisms as well, but feature flags are something used across many teams at GitHub.
F
We don't use long-lived feature branches — that is also one of the fundamental principles we follow to achieve good DevOps practices. Instead, we use short-lived feature branches with feature flags as the blast-radius control mechanism. Why don't we use long-lived feature branches? Because small batches are easier to review: engineers can make a small pull request, and generally the smaller the change, the smaller the chance of getting something really wrong in a production deployment. And finally,
F
avoiding long-lived feature branches prevents a lot of potential merge conflicts and clashes with other features that are currently under development.
F
Feature flags also help us test and have more control over testing features in our development environments: we can toggle feature flags from the command line, disabling and enabling them as the application is running. We do the same thing in automated tests as well — in our long-running tests and in the short-running required tests in our CI. Also, we have two different builds: one that runs with the feature flags disabled by default, and another that runs with all the feature flags enabled by default.
F
Once we have understood feature flags, we can now see how they help in defining the shipping strategy. Features developed at GitHub typically go through different phases. Initially, we enable a flag for individual actors: employees working on the feature, and customers who are experimenting with it and have the problem we are trying to fix. After that, the feature flag is enabled at the staff level.
F
The third stage is the beta stage: when you are about to release an important feature that will impact, say, open-source maintainers or another type of user, we want to test this feature with a small group first. After some time, we may interview them, gather feedback, and validate the implementation. In the fourth stage, we enable the flag for a percentage of actors — a group — in order to get more data and more testing done, and when the percentage of actors is changed, a message is sent to our deployment channel.
F
That way engineers are aware of the change and know that we have increased the scope of this feature. The last stage of deployment is dark shipping, which allows us to enable the feature flag for a percentage of calls. This is different from the previous mechanisms because it's not sticky: the actor can get the feature enabled on one call and not enabled on another.
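The non-sticky behavior described here is the key difference from the actor-based rollout: each call rolls the dice independently, so the same user can be on one request and off the next. A minimal sketch, with the percentage and seed chosen arbitrarily for the demo:

```python
import random

def enabled_for_call(percentage: float, rng: random.Random) -> bool:
    """Non-sticky (dark-shipping style) rollout: every call decides
    independently, with no per-actor memory."""
    return rng.uniform(0, 100) < percentage

rng = random.Random(42)  # seeded only so the demo is reproducible
calls = [enabled_for_call(30.0, rng) for _ in range(1000)]
```

Because nothing is keyed on the actor, this is only appropriate for invisible changes (such as the query-performance experiments mentioned next), where a user flip-flopping between code paths is harmless.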
F
This mechanism is not meant for features that are visible to all users, but for changes like performance improvements in a query and so on. Once the feature goes through all these stages — some stages, like dark shipping, are of course optional — the feature is made generally available. Once features are generally available, a blog post is typically published, and the feature flag is kept around for some time
F
for easy rollback. And don't worry — we have mechanisms like linting and tooling around GitHub Actions to pay down the tech debt that is incurred due to the creation of feature flags.
F
We use chatbots primarily to interface with and execute DevOps commands. This can be integrated into Slack and other channels, and developers can use Slack commands to run CI builds, queue builds and deploy trains, and deploy and monitor code in production. What powers these ChatOps commands? Feel free to check out Hubot at hubot.github.com — we'll see examples of bot commands in the next slides. This is a chatbot that was built by GitHub, and it's open source.
F
We identified that developers typically waited 45 minutes for a successful run of our continuous integration suite to complete before merging any change. This 45-minute lead time was repeated once again before deploying a merged branch, so in a typical scenario developers would have to wait for two hours — which is really insane. Deployment frequency is really high at GitHub, and two hours is really painful for a developer to wait before pushing their changes. So this is what we have done.
F
To understand the optimization, I need to introduce you to something called GitHub Enterprise Server, our on-premises offering of GitHub. GitHub.com is our online offering, and GitHub Enterprise Server is on-premises; we ship a new patch release every two weeks and a major release every quarter to GitHub Enterprise Server. We had two long-running test suites added to the CI workflow to ensure a pull request did not break the GitHub Enterprise experience for our enterprise customers.
F
It was also clear that this 45-minute test suite we added for Enterprise did not really provide value for GitHub.com deployments, which happen several times throughout the day. So again, driven by customer obsession and developer satisfaction, we developed something called the deferred compliance tool to save time in CI — I'll explain it on the next slide.
F
The deferred compliance tool is what we did to reduce CI time. It is integrated with our CI workflow systems, and it strikes a balance between improving the lead time for a change deploying to GitHub.com while at the same time creating accountability for the quality of Enterprise Server.
F
As you can see, the developer creates a PR to merge the change, and the required CI jobs and non-required CI jobs are started in parallel. If the required jobs complete, well and good: the pull request is merged and deployed. But say a long-running CI job fails. Then an issue with the deferred-compliance label is created, and the pull request author and the code owners are tagged so that they can take action. A warning message is also sent to the developer on Slack, and a 72-hour timer is kicked off. The developer now has 72 hours to fix the bug and push a change, or to revert the pull request; either way, a successful run of the CI job automatically closes the compliance issue and the 72-hour timer is turned off. But if the CI job remains broken for more than 72 hours, all deployments to github.com are halted, barring any exceptional situations, until the integration tests are fixed for the Enterprise Server.

This 72-hour timer is customizable, but our analysis showed that 72 hours is good enough, because we have developers ranging across time zones: a developer checking in something in San Francisco on a Friday afternoon shouldn't unintentionally block a developer who is going to start their day in Sydney on a Monday morning. That is something the 72-hour window handles well.
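As a rough sketch, the deferred-compliance flow described above could be modeled like this. This is a minimal illustration with hypothetical names, not GitHub's actual tool:

```python
from datetime import datetime, timedelta

GRACE_PERIOD = timedelta(hours=72)  # customizable, as noted in the talk

class DeferredCompliance:
    """Tracks failures of non-required (long-running) CI jobs
    without blocking the merge itself."""

    def __init__(self):
        self.open_issues = {}  # pr_id -> time the failure was recorded

    def on_ci_result(self, pr_id, job_required, passed, now=None):
        now = now or datetime.utcnow()
        if passed:
            # A successful run automatically closes the compliance
            # issue and turns the 72-hour timer off.
            self.open_issues.pop(pr_id, None)
        elif not job_required:
            # Long-running job failed: a labelled issue is opened, the
            # author and code owners are tagged, the timer starts.
            self.open_issues.setdefault(pr_id, now)

    def deployments_halted(self, now=None):
        """All github.com deploys halt once any failure is >72h old."""
        now = now or datetime.utcnow()
        return any(now - opened > GRACE_PERIOD
                   for opened in self.open_issues.values())
```

The real tool would of course create an actual GitHub issue and post to Slack; the sketch only captures the timer and halt logic.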
F
Now,
let's
look
at
deployments
and
how
github
does
deployments
there
are
some
really
good
concepts
like
github,
clear
deploy
trains
and
merge
queues
that
I'll
touch
upon
the
first
and
foremost
is
to
understand
what
is
a
deploy?
Model
branches
are
deployed
before
merging
them
to
mean
so
we
don't
merge
and
then
deploy.
Instead,
we
first
deploy
and
then
merge
to
main
this
meant
that
the
developers
can
add
changes
to
a
queue,
change,
the
status
of
a
queue
and
organize
groups
of
pull
requests
to
be
deployed
and
worse.
F
So
that
gives
more
flexibility
and
that's
the
reason
we
choose
this
model
for
deployments.
We
use
drains
to
conduct
build
deployments
before
deploying
a
build.
There
is
a
checklist
that
the
developers
have
to
go
through.
That
includes
write
code,
reviews,
linkings
unit
tests
and
whatnot,
but
once
they
are
ready
to
start
the
deploy,
they
join
the
deploy
queue
by
a
chat,
ops,
command
keyboard
again
comes
in
a
picture
and
notifies.
When
it's
your
turn
to
deploy
to
save
time,
you
don't
have
to
keep
looking
at
your
slack
channel.
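The deploy queue described above can be sketched as a simple FIFO with notifications. This is a minimal illustration with hypothetical names, not Hubot's actual implementation:

```python
from collections import deque

class DeployQueue:
    """FIFO deploy queue: developers join via a ChatOps command and
    are pinged when they reach the front, so they don't have to watch
    the Slack channel."""

    def __init__(self, notify):
        self.queue = deque()
        self.notify = notify  # e.g. a function that posts to Slack

    def join(self, developer):
        self.queue.append(developer)
        if len(self.queue) == 1:
            self.notify(developer, "you're up: start your deploy")

    def finish(self, developer):
        # When the current deploy finishes, ping the next in line.
        if self.queue and self.queue[0] == developer:
            self.queue.popleft()
            if self.queue:
                self.notify(self.queue[0], "your turn to deploy")
```

In practice `notify` would be a Slack API call; here it is injected so the behaviour is easy to observe.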
F
Approval
checks
from
the
review
group
group
are
needed
and
the
developers
are
also
recommended
to
test
their
changes
in
labs
environment
and
they
do
one
last
check
by
verifying
the
feature
flags
and
once
the
check
is
done,
the
rollout
happens
and
we
use
canary
and
auto
deploy
the
progress
that
we
roll
out
the
traffic.
I
will
explain
that
in
the
next
slide
as
well,
but
we
start
with
two
percent
camry
20
camry
and
then
followed
by
100
production
deploy.
So
that
is
to
make
sure
that
you
know
the
impact
is
not
huge
on
customers.
F
So
this
all
functioned
using
chat,
ops
in
a
slack
in
the
room
called
dot
com,
ops,
which
is
an
internal
slack
channel.
While
this
is
a
very
simple
system,
it
really.
C
F
Confusing
because
there
are
hundreds
and
thousands
of
messages
in
a
single
chat
room
to
manage
the
queue
to
deploy,
and
you
know
to
monitor,
what's
happening
and
all
of
a
sudden.
This
channel,
which
also
crucial
information
system,
is
just
being
overwhelmed
and
developers
could
no
longer
track
their
change
through
the
system
which
resulted
like
in
reduced
capacity
for
the
developer
and
an
increased
risk
profile
for
github.
So
we
have
to
do
something
about
it
and
we
were
able
to
create
some
mechanisms
around
it.
Well,
which
I'll
explain
in
the
next
slide.
F
So
you
can
see
the
pain
that
developers
would
have
to
go
through.
This
is
just
one
small
snippet,
where
chat
ops
display
the
statuses
of
your
dream,
and
this
is
one
step
of
about
a
dozen
messages
spread
across
hundreds
of
messages
and
hundreds
of
threads,
so
it's
really
hard
to
keep
track
and
validate
the
state
of
the
deploy.
So
this
is
a
good
justification
for
us
to
do
some
improvements
in
this
area.
F
The
entire
system
like
for
production
deployment,
works
like
a
state
machine.
This
diagram
shows
an
automatic
progression
between,
let's
say
two
percent
canary
20
canary
production,
deploy
and
the
ready
to
merge
stage.
There
is
a
five
minute
timer
that
separates
each
of
these
deployments
and,
finally,
a
pointer
is
automated
to
progress.
Accord
across
the
data
model
after
the
timer
gets
completed
and
stages
were
deployed.
So
this
is
our
stage
by
stage
canary
based
deployment
pipeline.
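The stage-by-stage state machine described above can be sketched roughly as follows. Stage names and the helper are hypothetical; the real pipeline is driven by GitHub's internal tooling:

```python
import time

# The four stages of the progression described above, each separated
# by a timer before auto-advancing to the next one.
STAGES = ["canary_2_percent", "canary_20_percent",
          "production_deploy", "ready_to_merge"]
STAGE_TIMER_SECONDS = 5 * 60  # the five-minute pause between stages

def run_pipeline(deploy_stage, wait=time.sleep, timer=STAGE_TIMER_SECONDS):
    """Advance through each stage, pausing between them.
    `deploy_stage` performs the rollout for one stage and returns
    False to abort (after which a quick rollback would follow)."""
    completed = []
    for i, stage in enumerate(STAGES):
        if not deploy_stage(stage):
            return completed  # abort mid-train
        completed.append(stage)
        if i < len(STAGES) - 1:
            wait(timer)  # five-minute timer between deployments
    return completed
```

Injecting `wait` keeps the sketch testable; production code would hang real timers (and pause/rollback hooks) off the same state transitions.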
F
So
what
resulted
was
a
state
machine
back
deploy
system?
So
we
know
that
it's
a
state
machine
and
what
we
did
is
we
converted
that
to
a
ui
component.
This
was
combined
with
the
traditional
chat
up,
so
we
have
the
goodness
of
both,
and
you
can
see
that
overview
of
deploys
which
have
recently
been
deployed
to
github.com
and
rather
than
tracking
down
various
messages
in
a
noisy
slack
channel,
you
can
go
to
consolidated
ui.
You
can
see
the
state
machine
progression
mentioned
above
in
the
right
on
the
right.
F
Drilling
down
into
specific
deployment,
so
the
previous
slide
was
the
list
of
our
deployments.
But
now,
when
we
drill
down
into
each
deployment,
you
can
see
everything
has
happened
during
a
specific
deployment.
You
can
see
that
each
stage
of
deployment
has
a
five
minute
timer
between
them.
You
can
pause
the
deployment
to
give
a
developer
more
time
to
test
in
case
something
goes
wrong.
We
have
a
quick
way
to
roll
back
in
this
ui
as
well
with
the
drop
down
in
the
right
top
corner.
So
that's
another
good
thing
that
this
ui
provides.
F
Finally,
the
entire
system
could
be
monitored
and
started
from
slack.
You
know
just
like
before
we
found
that
this
is
how
developers
typically
want
to
start
their
deploys
and
they
would
go
to
the
ui
to
monitor
in
the
ui
component,
but
then
they
would
come
back
to
the
slack
center
to
to
understand
the
collaboration
and
to
see
the
conversations
that
are
happening
for
every
deployment.
So
it's
a
combination
of
slack
channel
and
the
ui
that
we
use
to
do
the
requirements.
F
Of
course,
we
also
look
at
conf
deployment
dashboards
to
gain
confidence,
which
includes
looking
at
canary
progress
response
times
and
statuses,
slos
application,
health
database
health
and
we
are
also
constantly
surfacing
alerts
during
and
after
the
deployment
in
the
command
channel.
And
finally,
we
use
data
dog
incentive
for
monitoring
alerts
and
issues.
F
So
now
that
we
have
understood
the
deployment
practices,
let's
go
into
the
local
development
and
how
do
we
make
sure
that
the
lead
time
is
increased
via
code
spaces?
So
we
use
codespaces,
which
is
one
of
the
github
products,
do
check
out
codespaces.
If
you
have
not
already
checked
it
out,
guitar
is
being
built
using
code
spaces,
so
bit
of
developers
typically
do
the
local
code
setup
using
github
code
spaces.
F
F
Most of the commits come from macOS, and to date we have millions of commits, which means the scale is pretty high even for local development. We have a paradigm at GitHub called "scripts to rule them all": once the git clone is done, there are certain bootstrap scripts that are run to set up and start the server, and this is common across several tools and microservices across GitHub, not just github.com.
F
Typically,
it
used
to
take
like
a
half
a
day's
time
to
run
the
scripts,
and
sometimes
things
go
wrong
and
people
had
to
open
a
new
issue
and
then
internal
support
had
to
pitch
in
you
know
it
used
to
be
a
very
manual
process
when
you
followed
this
kind
of
scripts
manually,
and
we
also
had
this
nuke
from
orbit,
which
is
a
command
when
we
used
with
the
script,
to
bring
back
the
local
development
to
a
clean
state.
Let's
say
if
something
really
went
wrong
with
your
deployment
or
with
your
local
setup.
F
So
some
more
stats
about
github.com,
so
we
have,
we
have
13
gb,
that
is
the
source
code
size
on
disk.
Github.Com
takes
13
gb
and
it
takes
20
minutes
for
cloning.
So
this
is
a
lot,
and
that
means
that
github.com
typically
meant
45
minutes
for
bootstrapping.
So
it
takes
45
minutes
for
the
entire
bit
of
code
base
to
bootstrap.
So
there's
a
lot
of
time
and
what
was
our
goal?
We
had
to
bring
it
down,
so
we
did
certain
things
behind
the
scenes.
F
Of
course
we
use
code
spaces
and
we
wanted
to
bring
down
the
time
taken
from
45
minutes.
One
optimization
we
did
was
use
shallow
clone
many
times.
The
developer
working
on
github
code
doesn't
need
the
full
get
history
right,
so
they
just
need
the
initial
few
parts
of
the
head.
So
that's
the
reason
we
did
a
shallow
clone,
which
brought
the
overall
time
to
20
minutes
the
bootstrapping
time
from
20
minutes
to
90
seconds.
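For illustration, a shallow clone is just `git clone --depth 1`, which fetches only the most recent commits instead of the full history. The helper below is hypothetical, not part of GitHub's tooling; it only builds the command:

```python
def shallow_clone_cmd(repo_url, dest=".", depth=1):
    """Build the `git clone` argv for a shallow clone: only the most
    recent `depth` commits at HEAD are fetched, skipping the rest of
    the git history, which is what saves the time on a 13 GB repo."""
    return ["git", "clone", "--depth", str(depth), repo_url, dest]

# To actually run it, one might use subprocess, e.g.:
#   import subprocess
#   subprocess.run(shallow_clone_cmd("https://github.com/github/docs",
#                                    "docs"), check=True)
```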
F
So
that's
a
huge
improvement
that
was
achieved
and
we
also
improved
the
bootstrapping
further
by
doing
something
called
github
actions
and
we
run
a
nightly,
build
and
push
docker
images
and
dependencies.
So
this
is
like
pre-building,
and
this
saves
time
for
bootstrapping
as
the
docker
images
are
available
already.
So,
with
the
combination
of
shallow
clone
and
github
actions,
we
were
able
to
reduce
the
45
minutes
time
to
five
minutes.
So
that's
that
very
tremendous
improvement
in
the
speed
at
which
the
local
development
can
be
set
up.
But
are
we
satisfied
with
five
minutes?
F
No,
we
wanted
to
raise
the
bar
even
high,
so
this
is
what
we
do.
We
up
the
ante
and
we
do
ambitious
school
of
bootstrapping.
The
entire
local
development
of
github
code
base
to
10
seconds
was
it
possible?
Yes,
anything
is
possible,
so
this
is
something
that
I'll
go
in
the
next
line
on
how
we
did
it
so
we'd
achieve
the
10
seconds
by
using
something
called
pre-bills.
So
pre-builds
are
nothing
but
a
pools
of
code
spaces
that
are
created
and
are
fully
cloned
and
bootstrapped
already
and
ready
to
go.
F
And
now,
since
the
code
base
is
large,
we
also
the
code.
Spaces
also
provides
a
mechanism
to
control
the
amount
of
memory
that
is
used
by
code
spaces.
So
we
recommend
changing
from
16
gb
to
64gb,
because
the
size
of
the
code
base,
which
github.com
has
is
pretty
large
and
64gb,
is
what
we
felt
is
very
apt
for
our
use
cases.
F
Yes, with this we come to the end of this session. I hope you were able to learn a few things about how GitHub does DevOps: how we were able to improve the lead time, get faster CI/CD done, streamline deployments with certain optimizations, give the developer cleaner visibility into how deployments are happening, and democratize DevOps by using deploy trains and merge queues.
B
Wow, that was amazing, Richa. I'm not sure about you, but when I joined GitHub, the first question everyone asked was: does GitHub use GitHub? How does GitHub do DevOps? Finally, I have a link to share with them; I'm tired of explaining it over and over. I'm glad there is a video now, right? Bootstrapping a fresh environment.
B
Right, I think this concludes the DevOps track for day two; hope you all enjoyed it. There are a couple of live workshops lined up, all listed on githubconstellation.com, so please do have a look and have fun. Richa, do you want to remind them of what's happening on day three?
C
Absolutely. There are panelists who will be discussing the future of hiring and digital skilling, and we will have a few workshops on how students and teachers are using GitHub. Enjoy day three! Before we sign off, a quick reminder to all of those whose tweets were featured today: do DM the GitHub India Twitter channel with your details so that we can ship you some cool swag. Hope you all had a great day two; we surely did. Thank you all for joining us today, and see you all tomorrow.