From YouTube: It works in dev, test and staging, why not in Prod? 😡 How Trilio uses GitHub Actions #DemoDays
Description
2:04 - Program Start
3:49 - Overview
10:02 - Multi-Cloud Data Protection and Management
14:24 - A persona-based view of the challenge
18:14 - TVK Runner for GitHub Actions
21:54 - Live Demo
GitHub Solutions Architect Andrew McCoy (Moose0621) is joined by Prashanto Kochavara and Ben Morrison of trilio.io to dig into Kubernetes and cloud-native deployment pain points, and how they use GitHub Actions along with TVK (TrilioVault for Kubernetes) to solve issues when deploying to production.
https://www.trilio.io/triliovault-for-kubernetes/
For upcoming Demo Days: https://www.linkedin.com/company/github
A
Hello everyone, hello LinkedIn, hello devs, hello folks on LinkedIn. Good morning, good afternoon, good evening, good night; always nice to see worldwide representation. Today we're going to talk about a particular cloud-native deployment pain point here on Demo Days, and to help me with this topic I'd like to introduce you to Moose, who is a field solutions engineer over here at GitHub. We also have some additional special guests from Trilio.

A
I've got Prashanto and Ben. Prashanto is the Director of Product over at Trilio, and Ben is the solutions...

B
Awesome, thank you so much, AJ. And actually, yeah, I've got to update my LinkedIn: I'm actually now on our Advanced Security team over here at GitHub as a field architect, so I've got to go and update that. Happy to be here today with my friends Ben and PK.

B
Interestingly enough, I've got to give a shout-out to Rob and Dalton Menhall over at Trilio for helping get this together through our channels, having worked together in the past. We got connected a couple of months ago, just kind of understanding what Trilio could do, potentially not just with GitHub but for the GitHub community as a whole. So I'm happy to have you join them... be joining them.

B
That's why we do it live, right? To be joined by them today, so they can kind of show you what has come out of those conversations. They built a really awesome GitHub Action that I think is going to provide a lot of value to the community here. PK?

D
Moose is absolutely right. You know, the joint collaboration between GitHub and Trilio has been extremely valuable. We definitely feel we have built something really intricate over here, which is going to solve the challenges that we experience in a cloud-native delivery model, and Ben and I are going to be super happy to present everything, show you all a demo of how it works, how everything is built, and the value that it's providing in the end.

D
Okay, so without much ado, I'm going to put this in presentation mode and we can get started. From an agenda perspective, what I'm going to do first is talk about the challenge, or the problem, that we are solving over here. Next, we will get into Trilio's cloud-native data protection platform, and we'll look at the solution: what the technology behind the solution to this challenge is and how it's powering it.

D
After that, we will provide a quick introduction to the TVK GitHub Action, the workflow, or GitHub Action, that Trilio has developed. Then Ben will go into a demo showing you exactly how everything works and how we are going to solve that pain point for you, and finally we'll wrap it up with a conclusion; if we have any questions, we'll try to answer those as well.

D
Okay, so one of the most common CI/CD issues that we hear about when we talk to our customers is that when they are developing code, everything works well in their dev environment. They move it into their test environment, they are able to test properly, and everything works there as well. Staging-wise, just before launching into production, everything looks clean. And then, when you launch into production, you hit a bunch of different issues, and then, oftentimes...

D
You end up looking like that image on the bottom right. You're thinking, hey, how did this happen? Why did I miss this? And when you actually find the root cause, you feel, hey, this is something that I should have figured out earlier on, or something that I could have stopped from failing in production. Now, one of the biggest reasons why this ends up happening is that, in all the other silos that we have, the dev, test, and staging environments, we generally end up testing with older copies of the data, or stale data, or random data, which is not a true representation of your production data. As a result of that, issues that tend to happen more at runtime, you're not able to address or surface those before launching into production.

D
To keep our conversation and this session a little interactive, we have a poll question based on the problem statement that I've just described, and what we want you to do is use the LinkedIn Live chat to type in your answer. So the question here is: do you face these CI/CD challenges of passing every step until production but failing in production? And your options are: option A, yes, you face these challenges but you do not have a solution.

D
Option B, yes, you do face these challenges but you have a solution. And then option C is, obviously, you have this challenge but it's really not important to you. Again, we are looking for your responses in the LinkedIn Live chat; just enter the letter corresponding to your answer. And AJ, I'm going to look to you, because I cannot see the chat.

B
You know, this is definitely bringing me back to my days years ago as a DevOps engineer. I... yeah, definitely faced a lot of these challenges: misconfiguration of files, a password rolled or got updated, there were Control-Ms in the configuration, a popular one as well. Definitely.

D
Yes, yes. And I think, you know, the cloud-native journey is definitely helping solve some of these problems, and we'll definitely be addressing those right now.

B
Definitely a good smattering of answers here. Looking for the trend, there's definitely a good amount of Bs.

D
There's someone saying sorry... okay, I guess the general consensus, though, is that this is definitely a problem, and you may be trying to solve it in different ways. Again, today we are going to be talking about how you solve this in a cloud-native, Kubernetes-native environment.

D
Making sure that your application delivery is as fast as application development. So we'll jump into talking about the technology, Trilio, TrilioVault, which is powering the solution and the GitHub Action that we have built. Trilio as an organization was founded in 2013. We are already the leading data protection and data management solution for OpenStack and Red Hat Virtualization environments, and now vSphere, and we are on the Kubernetes journey as well.

D
So what you see in front of you is the TrilioVault management console, and the way we have built our product is we've made it completely Kubernetes-native. So as long as you have a Kubernetes-qualified environment or a CNCF-conformant Kubernetes distribution, Trilio will work out of the box on it, and whatever we are showing you as part of the GitHub Actions that we've developed will also be supported without any additional configuration required.

D
Secondly, in order to communicate with storage, your application volumes or your persistent volumes, we leverage the Container Storage Interface (CSI) so that we can keep that data layer extremely agnostic. So even if you're dealing with heterogeneous environments throughout your development, test, or staging clusters, even if they are completely heterogeneous, Trilio will still work out of the box on them without any configuration required.

D
And obviously we know everyone is following a multi-cloud or hybrid-cloud strategy, so you will be running the software, or the solution, in multiple places, and Trilio has you covered for that as well. And then, finally, the technology is completely application-centric. So what we allow you to do is capture your entire namespaces.

D
If you want to capture specific Helm applications, or capture operators themselves, like capturing the operator objects, the operator's custom resources, and the application that the operator is managing, all of that as one unit, you can; and then, finally, we allow you to capture whatever you want within your Kubernetes cluster via labels as well. So overall, 360 degrees, it makes for a really nimble, flexible solution that works in any of your deployment pipelines; however your pipeline may be constructed, Trilio can support you there.

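To make those capture scopes concrete, here is a minimal sketch of what a namespace-, Helm-, or label-scoped capture can look like as a TrilioVault BackupPlan custom resource. The API version, kind, and field names approximate TVK's public CRDs and should be treated as illustrative; the target, release, and label values are placeholders rather than anything shown in this session.

```yaml
# Illustrative sketch only: field names approximate the TVK BackupPlan CRD
# and may differ from the exact schema of your TVK release.
apiVersion: triliovault.trilio.io/v1
kind: BackupPlan
metadata:
  name: demo-backupplan
  namespace: ben-demo                # the application namespace being protected
spec:
  backupConfig:
    target:
      name: demo-target              # TVK Target, i.e. the central backup repository
      namespace: ben-demo
  backupPlanComponents:              # optional scoping; omit it to capture the whole namespace
    helmReleases:
      - mysql                        # capture a specific Helm release...
    custom:
      - matchLabels:
          app: mysql                 # ...or capture resources selected by label
```
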
D
So we capture all the metadata and the data volumes, and we store them in that central repo. Now, once it is in that central repo, you have the ability to take that same data and restore it back into the same Kubernetes cluster. Obviously, you can overwrite what you had in your namespace, but you can also deploy it into another namespace, and it also allows you to take that application that you captured and move it into a namespace of a completely separate, completely different Kubernetes cluster.

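The "central repo" mentioned here is what TVK models as a Target, typically an object-storage bucket or an NFS share that every cluster involved can reach. A minimal object-store Target might be sketched as below; again, the field names approximate the TVK CRD, and the bucket, region, and secret names are made-up placeholders.

```yaml
# Illustrative sketch only: approximates the TVK Target CRD; the bucket,
# region, and credential secret are placeholders, not values from the demo.
apiVersion: triliovault.trilio.io/v1
kind: Target
metadata:
  name: demo-target
  namespace: ben-demo
spec:
  type: ObjectStore
  vendor: AWS
  objectStoreCredentials:
    bucketName: tvk-demo-backups
    region: us-east-1
    credentialSecret:
      name: s3-credentials           # Kubernetes Secret holding the access keys
      namespace: ben-demo
```
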
D
So now you can visualize how, with this method of doing point-in-time captures and point-in-time restores, Trilio can actually work between your production environments and your dev and test environments. We can easily capture your running application and, along with the metadata and the data aspects of it, bring it back into a secondary environment where you're performing your testing and validations.

D
So, on the extreme left, we can think of the persona of Lisa. Lisa is the developer, who is 80% of the time writing code, building microservices, and focused on developing the application, and her main challenge, again, is to ensure that she can test with production data so that she can increase the success of launching, or delivering, software back to customers.

D
Now, Lisa works very tightly with Brian. Brian is the SRE. He works with Lisa to take the microservices that she has built and run them in the Kubernetes system, making sure they have uptime and are continuously running, and, from a monitoring and logging perspective, making sure they are providing all those metrics and stats. And again, the main thing that Brian is also concerned about is making sure that when he launches this application in production, it is successful.

D
It
is
successful
and
he
doesn't
have
to
keep
re-tweaking
and
rehashing
and
fixing
issues
that
could
have
been
caught
beforehand
and
as
we
move
from
the
development
side
to
the
upside,
we
have
rob.
Rob
is
also,
you
know,
think
of
him,
like
an
sre
from
the
op
standpoint,
who
is
building
all
these
clusters
for
the
dev
teams
to
play
in
and
develop
their
code
and
run
their
applications
in
now.
Rob's
intent
is
also
he's
looking
at
everything
at
a
macro
level
right.
D
You
know,
obviously,
if
any
kind
of
corruptions
or
security
breaches
happen
around
ransomware
recovering
from
those
those
are
the
different
activities
that
rob
is
looking
for
and
then
finally,
you
know
looking
or
going
deeper
into
the
upside.
We
have
jane
jane.
You
can
imagine
her
to
be
the
kind
of
like
the
ide
director
who
is
focused
on
cost
related
items
and
focused
on
business
continuity
because
of
which
migrations
of
applications
become
very
important
and
year
and
year.
D
So, overall, this is how the personas within a Kubernetes system interact with each other, and obviously what they do is leverage a lot of infrastructure as code. Their infrastructure as code is placed within the Git repository itself, so in GitHub, and they not only have Lisa's development code over there, Brian's deployment code, and Rob's deployment code for deploying the clusters, but also the application and how it's built.

D
Possible. Okay, with that, I'm going to start talking about the GitHub Actions runner that Trilio has built. I'm going to start off from the left-hand side, that is, the developer laptop. You can imagine someone writing code and checking it into GitHub. Now, within GitHub you have the concept of workflows.

D
What that runner basically does is, once you check in code, first a backup job gets triggered, and as part of that backup job the TVK runner will go to the production cluster, which is the source cluster, and capture everything from there. So let's suppose you had a MySQL database running in production: we will capture all the metadata and the data objects and get them into the centralized repository.

D
So what happens is, as soon as your code is checked in, on one side you are running your integration tests, making sure that all the code can be packaged into a container image, and while that is happening, you are also fetching your production data and bringing it into your test environment, so that once your CI tests pass, it's ready to run against that newer copy of data that you have just brought in. Okay, now there are obviously other items that Trilio provides as features as well, for example, anonymizing the data as it moves from the production side to the destination side.

D
Lastly, when you have multiple clusters, it's always possible that you are running different storage classes or different storage types in your production environment versus your test environment. What this runner also allows you to do, as it is moving that production data from the source side into the destination side, is change the storage class on the fly as part of the restore.

D
Okay, so now I'm going to hand it over to Ben to basically put the rubber to the road and show whatever I have spoken about in action, as part of a demo.

C
Can you see my screen, the Lens terminal up here?

C
Good to go? Sounds good. So we're going to do a demo today showing a backup and a restore from one cluster into another cluster using that GitHub runner, and so we have two clusters here. Our first one is going to be a Rancher cluster, and this is going to be the source cluster; and then, secondly, we have our GCP OCP cluster. This is OCP running on GCP, for our target, or destination, cluster.

C
So, first and foremost, as Prashanto said, our GitHub runner is running in our source cluster. You can see this example runner we have here, using our repository, the trilio-demo flux-demo repo, and it is up and running. The application we're going to be migrating over from one cluster to another today, backing up and restoring, is going to be a MySQL application.

C
We also have here a PVC associated with this MySQL database; it's bound and good to go, and you'll see that it's using the EBS storage class, ebs-sc. And then, lastly, we're going to show here that there are no backups that have been taken yet of this application. The Trilio runner, once it starts its workflow, is going to take a backup here in the source cluster and then restore it into our destination cluster.

C
Now, switching over, we're in GitHub here and we're looking at that trilio-demo flux-demo repo where our workflow is sitting, to use that GitHub Trilio runner, and so we have a few different files we're going to look at here. First, we'll look at this source kubeconfig. As you can see here, it's that same RKE cluster that I showed you before for the source cluster, and we just have our kubeconfig in here so the runner knows which cluster to be pulling from.

C
And then we also have our environment variables here. This is where we're going to see what target we're going to be using to store that backup to, and where to pull that backup from in the case of the restore. We also have the namespace where that target is sitting on this source cluster, and then the backup namespace where the backup is going to occur, that source namespace, which is the ben-demo namespace, and then the destination cluster.

C
We're going to have this configured so that it triggers the workflow every time a push or pull request happens on the main branch only. We're also going to make sure, for the case of this demo especially, that we can manually execute that action if we want. Our jobs are then going to be, first, the backup job, as Prashanto alluded to, and then we're going to do a restore next.

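For readers who want to picture the workflow being described, here is a rough sketch of its shape. The triggers and job ordering follow the narration above, but the job and step names, runner label, and kubectl-based steps are assumptions for illustration; the actual trilio-demo flux-demo workflow uses Trilio's TVK runner and its own step definitions.

```yaml
# Hypothetical workflow sketch: triggers and ordering match the demo narration,
# step contents are illustrative and not the repository's real steps.
name: tvk-backup-restore

on:
  push:
    branches: [main]                 # trigger on pushes to main only
  pull_request:
    branches: [main]                 # ...and on pull requests against main
  workflow_dispatch: {}              # allow manual runs, as used in the demo

jobs:
  backup:
    runs-on: self-hosted             # the TVK runner registered in the source cluster
    steps:
      - uses: actions/checkout@v3
      - name: Load environment variables from .env
        run: cat .env >> "$GITHUB_ENV"   # target name/namespace, backup namespace, etc.
      - name: Create the BackupPlan and Backup on the source cluster
        run: |
          kubectl apply -f backupplan.yaml
          kubectl apply -f backup.yaml

  restore:
    needs: backup                    # the restore runs only after the backup job completes
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - name: Load environment variables from .env
        run: cat .env >> "$GITHUB_ENV"
      - name: Restore into the destination cluster
        run: kubectl apply -f restore.yaml   # restore manifest built with the storage-class transform
```
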
C
These environment variables are pulled from our .env file, and then we're going to create the manifest for our restore. One thing I want to point out here is this storage class transform: since we're restoring, using Trilio, from one cluster, a Rancher cluster, into a secondary one, an OCP cluster running on GCP, they're two completely different distributions.

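A restore manifest with that kind of storage-class transform could be sketched roughly as follows. The transform block's field names are illustrative approximations of TVK's transform support; the storage class names ebs-sc and standard-csi, and the runner-backup and runner-restore names, are the ones mentioned in this demo.

```yaml
# Illustrative sketch only: a TVK Restore that swaps the source storage class
# (ebs-sc) for the one available on the destination cluster (standard-csi).
apiVersion: triliovault.trilio.io/v1
kind: Restore
metadata:
  name: runner-restore
  namespace: ben-demo
spec:
  source:
    type: Backup
    backup:
      name: runner-backup            # the backup taken on the source (Rancher) cluster
      namespace: ben-demo
  restoreNamespace: ben-demo
  transformComponents:
    custom:
      - transformName: swap-storage-class
        resources:
          kind: PersistentVolumeClaim
        jsonPatches:
          - op: replace
            path: /spec/storageClassName
            value: standard-csi      # storage class used on the OCP-on-GCP cluster
```
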
C
So here's that series of checks that we talked about before in the workflow itself: making sure kubectl is installed, loading the environment variables.

C
And now it's going to kick off creating that backup plan and backup. Now, Prashanto, as I mentioned before, the backup itself in this demo is going to take about two and a half minutes, maybe three minutes, and the restore itself will also take about two and a half to three minutes. So at this time, if there are any questions in the chat or anything you want to address, Prashanto, I'll let you answer some questions there.

D
Sure, awesome, man, thank you for driving us through that. And yeah, we can see that the backup plan and the backup are getting created. So basically this would mean that the production data is getting captured now and we are moving it into that centralized repository.

B
I just want to say, that's some great modularity in there, just kind of thinking how the community can use that. I love that anonymization piece you were talking about, because that was always a big struggle I had back a few years ago now at this point: you don't need customer data or any kind of identifiable information, and there's some other regulation around that too. So that's a really big help, as far as making sure you're treating data as it needs to be treated through your workflows, correct?

D
And, you know, that is definitely one intricate piece. As we were talking about the transforms, storage is one item that customers would definitely want to massage and tweak as they're moving their applications around, and these transforms are so powerful that anything and everything within your YAML file you can technically tweak as part of the overall operation.

D
So in this runner that we have created, we have provided some injections for the storage class, provided some integrations for changing the service port and things like that, but we also have an entire API library of additional features, and users can actually use that entire API library to inject any kind of transform, any additional features or things that they want to do as part of the capture and restore process.

D
Okay, good, that's great, it's flowing very well. And now, while we were doing this, there is a question I see in the chat window: do you have protected customer data in your production data that you are copying over to a test environment? So, Ukachi, yes.

D
So, as I was saying earlier, when we are doing the capture, we have this feature known as hooks, and as part of hooks you can actually start talking to the application directly and make sure that, when you are doing a copy of that application, you can anonymize a particular table before that capture happens. So if there are any security and compliance reasons why you cannot bring production data back into your own environment, your customer-sensitive information, you can manage and handle that very easily with Trilio.

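As one way to picture that anonymization step, the sketch below masks customer emails on the copy that lands in the test namespace, written here as a plain Kubernetes Job for clarity; TVK's hook mechanism can invoke an equivalent command around the capture itself, as described above. The table, database, image, and secret names are invented for the example.

```yaml
# Hypothetical example: mask PII in the test-side copy of the database.
apiVersion: batch/v1
kind: Job
metadata:
  name: anonymize-customers
  namespace: ben-demo                # the test-side copy, not production
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: mask
          image: mysql:8.0
          command: ["/bin/sh", "-c"]
          args:
            - |
              mysql -h mysql -u root -p"$MYSQL_ROOT_PASSWORD" appdb \
                -e "UPDATE customers SET email = CONCAT('user', id, '@example.com');"
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-credentials
                  key: password
```
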
D
Yes, I mean, there is the ability to, you could technically do that, but the point that we are trying to build on over here is that your production environment is continuously changing. There is new data that gets added, new types of data, new data forms that are getting added; that same data is being used by different microservices, and when you are injecting new variables into your code, maybe it's a floating point, an integer, a double...

B
Makes sense. It could also be situationally dependent; sometimes you can create synthetic test data. But it sounds like this is a really great way, as production data does change, to make sure that you're continuing to test against those same data sets so that you capture those defects further upstream. I mean, that's something I think we can all relate to. A couple of other questions I can check: how easy is this to deploy, from the chat, from David?

B
Well, that's... yeah, that's turnkey, I like that. Another one, from Audrius: what happens when production data is not 20 gigabytes but a lot more? Yeah, I've dealt with some customers lately; they told me that some models they create can be upwards of 20 petabytes of data. So...

D
Right, so one thing that we do is provide incremental captures as well. So let's say you did this: the first time, you got your production data in, and then all your incremental testing activities can be based on the incremental captures, which will be overlaid on that base image that we already have.

D
So that way you can continuously ensure that your data is current and that you're testing with it; however big your data size may get, you are still able to test with it. And this is going to be a key feature that we provide as part of the product in the next few months.

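A later incremental capture can be expressed by pointing a new Backup at the same BackupPlan and marking it incremental, roughly like the sketch below. The type field and the names follow the pattern of TVK's documented CRDs but are illustrative here.

```yaml
# Illustrative sketch only: an incremental Backup against the same BackupPlan,
# so only changes since the previous capture are stored.
apiVersion: triliovault.trilio.io/v1
kind: Backup
metadata:
  name: runner-backup-incr-1
  namespace: ben-demo
spec:
  type: Incremental                  # overlaid on the existing full (base) backup
  backupPlan:
    name: demo-backupplan
    namespace: ben-demo
```
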
C
If I can chime in here: now, as I'll show you, the restore completed. Do you want me to go ahead and finish up and show the pods running in the new cluster? Yeah? So let's do that. Great. So here, first off, we're in the source cluster again, that Rancher cluster from the beginning. I'm just showing here that we now have that runner backup that has been completed; it was just done in the past couple of minutes here. It was a namespace backup and restore; that's how we chose to do it.

C
This time it was a namespace backup and restore, and it's available. As was mentioned before, there are incremental backups you can take to save space on your storage target, but here we did a full backup. Now, going into our destination cluster...

C
We saw that there were no MySQL objects running in this cluster before. Go ahead and take another look, and now you can see all four of those pods are up and running: our three front-end pods and our back-end MySQL one, good to go, just created in the past three minutes. There we have our two services, our two deployments, and our two replica sets.

C
We also have that persistent volume claim in here, bound, good to go, just created about five minutes ago, and here you'll see, as I pointed out before, we switched from using one storage class in the other cluster to now using the storage class that is used in this cluster, that standard-csi one. Once it was all in that restore manifest, that transform was put in there.

C
There was nothing, as you saw, that we had to do manually on our end. And then, lastly, just to show you, we have the restore itself up here, runner-restore, just as it was named in the workflow; it's completed, just done in the past couple of minutes here, and it was that namespace restore. And with that, I'm good with the demo and I'll toss it back over to you, Prashanto. Awesome.

D
Thank you so much, Ben. I think that was extremely valuable, especially doing it live; the demo gods were definitely with us. And I think it was presented in a very succinct fashion, showing exactly what we spoke about in theory and corroborating it in practice.

D
I'm pretty sure it's going to be very valuable for our audience here. We can move back to my slides now and put this back into presentation mode. So, in conclusion, what we want to say is that, as we all know, Kubernetes equals all your microservices plus infrastructure as code, right? You want to make sure that you're managing your Kubernetes environments via infrastructure as code and not manually making changes; that's the recipe for success, and that's how you will be able to streamline your operations.

D
Kubernetes focuses on application development and application delivery, and you need to ensure that you're using Kubernetes in this intelligent way, as we were showing: testing with production data, automating the entire workflow. You need to employ and deploy these practices within your own environment to get the real benefits of a cloud-native or Kubernetes environment.

D
We've spoken about TrilioVault being a purpose-built solution for managing your data and your cloud-native applications together as one unit. GitHub Actions, as we all know, are purpose-built tasks supporting CI/CD pipelines, and what Trilio and GitHub Actions together are providing you is the capability to continually test with production data, to improve the software quality and delivery process for your customers.

D
We are all aware of backup and recovery being used for day-two activities: disaster recovery, migrations, and things like that. Today we have shown you how the same tool, via our infrastructure-as-code environment, GitHub Actions, can be used as part of your CI/CD or your DevOps processes.

D
What we have shown you today can obviously be further extended. We have been hearing about DevSecOps; you can think about "dev resiliency ops." As you are deploying your applications in environments, you want to make sure that they are protected as they are getting deployed as well, with the power of infrastructure as code, the power of Trilio and GitHub Actions.

D
So with that, we conclude our presentation. We definitely thank you all for your time and for paying attention to what Trilio had to present. We will be available to answer questions right after, and if there are any questions even after the session, you can reach out to us at trilio.io; my name is Prashanto, my last name is Kochavara, and you can reach Ben there as well.

B
Looks like we do have a question for you, PK and Ben, from Ravi: how do we ensure application consistency? Is that done via hooks?

D
Yes, so hooks are the exact way to get application-consistent backups. What we have done ourselves is qualify a bunch of different databases, like MySQL, Postgres, Cassandra, you name it. All these different databases have been qualified and validated, and we have provided the exact YAML files and everything that you would need to use directly in our GitBook, which is our public documentation.

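As a rough picture of what such a hook looks like, the sketch below quiesces MySQL before the snapshot and checks it afterwards. The Hook CRD field names approximate TVK's hook mechanism, and the commands are a simplified flush-and-check pair; the qualified, validated YAML for MySQL, Postgres, Cassandra, and the rest is the version in Trilio's documentation.

```yaml
# Illustrative sketch only: pre/post actions run in the database pod around the capture.
apiVersion: triliovault.trilio.io/v1
kind: Hook
metadata:
  name: mysql-consistency
  namespace: ben-demo
spec:
  pre:
    exec:
      command: ["/bin/sh", "-c", "mysql -u root -p\"$MYSQL_ROOT_PASSWORD\" -e 'FLUSH TABLES;'"]   # flush data to disk before the snapshot
  post:
    exec:
      command: ["/bin/sh", "-c", "mysqladmin -u root -p\"$MYSQL_ROOT_PASSWORD\" status"]          # simple post-snapshot health check
```
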
B
Oh, perfect. So, just like, you know, the Actions Marketplace will do, where, excuse me, where they'll likely be able to find this GitHub Action, they can pull that information together for themselves.

D
That is good, yeah. So one of the things we are doing, with whatever Ben has provided and spoken about and the demo he did, is working with the GitHub team to host it as a solution that customers can leverage directly from GitHub. We will host it internally as well, and we will continually be updating it with additional features and providing our own custom module that we built. But obviously developers and SREs are free to build their own modules leveraging the Trilio solution.

B
Now, it's open, I'm guessing, the Action is in an open-source repository? If anybody makes an enhancement or interesting things like that, pull requests are welcome on that, exactly?

D
We will. We have it right now in a private repository, but we will be exposing that; we will be making some noise around that, evangelizing it. But yes, the end goal is to make sure that it's an open-source model, so that we provide the starting point and our users can keep enhancing it, and have that community involvement to make sure that life is simple and easy.

B
Awesome, sounds great. I don't see any more questions in there. No, this has been great; I've definitely been enjoying working with you and the team so far today, and this definitely solves a myriad of questions. I now deal primarily with our Advanced Security stuff, but really, helping folks as they're rolling out GitHub Actions, this is really that next-generation level of problem folks are at now, and now they have a best-and-brightest solution to do that, right, native on GitHub Actions. This has been great. Yeah, absolutely.

D
Thank you. Thank you for the kind words, and I'm pretty sure there will be additional innovative methodologies that we'll all come up with, or we'll be coming up with, and we'll be more than happy to work with the excellent team at GitHub and add more value to, you know, the IT world in general.
