From YouTube: TAG General Meeting - 2022-11-16
Description
No description was provided for this meeting.
B
Hello, hello, I'm tiambo, and we are maintainers from the KubeVela team. We hope to present something about KubeVela in this meeting. We are in China, and due to the time zone it's zero o'clock here; it's late at night. So would you mind letting us present first?
E
It's 20 to 30 minutes.
E
I have another question about the meeting: will this meeting be recorded? Because, if possible, we would like to make a recording of the presentation so that we can share it with the other community members.
B
Okay, we are all here now. Do we still need to wait for some people?
F
Okay then, let's start. Yes, hello everyone, welcome to today's TAG meeting. I think somebody already shared the meeting notes in the chat, and I think we have a full agenda today: we have two presentations, one from KubeVela and the second one from kluctl.
F
So let's start. I think the first one was KubeVela, so the stage is yours.
B
Very much appreciated, and indah will present for us.
E
Yes, yes, okay! So let's start. Hello everyone, my name is indah and I am one of the maintainers of KubeVela. Today I will give a short presentation on KubeVela: I will introduce the features of KubeVela, what KubeVela can do, and some basic concepts of KubeVela, and I hope it will be clear for everyone.
E
Okay, so this presentation starts with a very simple question: what is KubeVela? In brief, KubeVela is a modern software platform that targets delivering and operating applications across today's hybrid, multi-cloud environments, and makes that easier, faster and more reliable. There are several features that KubeVela has to make application delivery easier. The first one is that it is infrastructure agnostic: as you can see in this figure, KubeVela handles the application delivery, and the destination of the delivery can be of various types.
E
For example, resources can be dispatched to Kubernetes multi-clusters, and it is also possible to deploy cloud services on cloud platforms such as Microsoft Azure or Alibaba Cloud. Besides, it is even possible to deliver resources to edge devices through integrations with other open source projects such as OpenYurt or KubeEdge. The second main feature of KubeVela is that it is an extensible platform: the KubeVela system is highly programmable and extensible, which is achieved by leveraging a very powerful configuration language called CUE.
E
So with this configuration language we can build abstractions over upper-layer resources and organize them into an application. Users of KubeVela can easily integrate their own capabilities through KubeVela add-ons, writing some simple CUE scripts to integrate open source projects such as Prometheus, Istio, Terraform, Crossplane and many others into the KubeVela system as well. The third main feature is that KubeVela is entirely application centric: we developed a lot of features to make application delivery easier.
E
For example, there is the KubeVela workflow inside the application, which can organize the details of the delivery process, and we also have other mechanisms such as triggers and UI interfaces, which help us integrate with continuous integration tools such as Jenkins and other CI pipelines. We also care about the day-2 operation part, so it is possible to leverage the observability services of cloud providers and use them to monitor the applications in the KubeVela system. So that is an overview of KubeVela; let's go into some more details.
E
Okay, so the intuition behind KubeVela is to solve some problems for users of cloud native infrastructure. As we know, with the fast development of the cloud native community, more and more infrastructure capabilities are exposed to upper-layer application developers today. Although that makes it very powerful for them to design their own customized application platforms, developers who mainly focus on the application logic now need to care about the details of the underlying infrastructure.
E
So there are lots of infrastructure details for application developers to know, and this prevents them from focusing on their own business logic, because they need to learn the underlying infrastructure knowledge. The complexity is becoming more and more critical for application developers. Another simple example is that when the underlying infrastructure changes, the application developer needs to know how to tackle those changes.
E
For example, if the underlying infrastructure is Kubernetes version 1.21, the application developer might need to use the v1beta1 Ingress on this infrastructure to make their application accessible. But if the underlying infrastructure is upgraded to 1.22, where the Ingress API has changed to v1, then the application developer needs to migrate their Ingress to the newer version and redeploy those resources.
E
So that means the application developer needs to know how to handle changes in the underlying infrastructure, and that is really complex for them. So how can we make this application delivery process easier? Here is the Open Application Model, which we defined two or three years ago, and KubeVela is one of the implementations of this model.
E
With the Open Application Model we define a unified application specification, which provides interfaces for application developers to deploy their applications. On the left side of this slide you can see a simple application spec: a web service is included in this application, and there are scaler and gateway traits which describe how this web service should be deployed. For example, we might want three replicas and to expose a port. Compare that with deploying the resources as raw Kubernetes manifests directly, shown on the right side.
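A minimal sketch of the kind of application spec being described, with one web service plus scaler and gateway traits (the name, image, domain and port here are illustrative assumptions, not taken from the slides):

```yaml
# Illustrative KubeVela Application: one webservice component,
# scaled to 3 replicas and exposed through a gateway trait.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: demo-app            # hypothetical name
spec:
  components:
    - name: frontend
      type: webservice
      properties:
        image: nginx:1.21   # hypothetical image
      traits:
        - type: scaler
          properties:
            replicas: 3
        - type: gateway
          properties:
            domain: demo.example.com
            http:
              "/": 8080
```

The point of the comparison on the slide is that this short spec expands into considerably longer raw Kubernetes manifests (Deployment, Service, Ingress) when rendered.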
E
This is the major goal of the Open Application Model. Under the application there are abstractions over the underlying infrastructure, and here we have the CUE language, which provides templates to do the abstraction job. On the right side you can see a very simple abstraction over the underlying Deployment, which exposes limited parameters to users. If users want to use such a workload in their application, they only need to specify the image field, and then the underlying Deployment will be automatically rendered and dispatched to the cluster.
E
So that is how the application model works with components. With the CUE language, it is possible for users to make arbitrary extensions and connect the application system with the underlying CRD system. Okay, so apart from the component part, we also have another concept called a trait. A trait is used to describe auxiliary things that help the components to work.
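The component abstraction just described, where only an image parameter is exposed and a full Deployment is rendered from it, can be sketched as a CUE-templated component definition. This is an illustrative shape only; the definition name and the exact fields are assumptions, not taken from the talk:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: ComponentDefinition
metadata:
  name: simple-web               # hypothetical definition name
spec:
  workload:
    definition:
      apiVersion: apps/v1
      kind: Deployment
  schematic:
    cue:
      # The CUE template exposes only "image"; the rest of the
      # Deployment is rendered automatically from the template.
      template: |
        parameter: image: string
        output: {
          apiVersion: "apps/v1"
          kind:       "Deployment"
          spec: template: spec: containers: [{
            name:  context.name
            image: parameter.image
          }]
        }
```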
E
For example, as we said, there are the scaler trait and the gateway trait, which define how we scale out the replicas of the deployment and how we expose the services to the outside. The design of components and traits is actually a separation of concerns. For example, while the scaler trait can patch the replicas field in a Deployment, it is actually capable of modifying various types of workloads, such as a StatefulSet or even other customized workloads such as the OpenKruise CloneSet.
E
With this separation, different operational features can be divided into different parts, and users can focus only on what they need to know, so that we reduce the knowledge an application developer needs when they want to deploy their application. Here is a very detailed example of the gateway trait, and as you can see, only limited parameters are exposed to application developers.
E
One group is what we commonly call application developers, indicated as the end users in these slides. These users mainly interact directly with the KubeVela application: they write simple application manifests to specify how they want to deploy their application. There is another group of people who focus on building platforms inside the company; for example, they want to integrate other capabilities such as a service mesh.
E
They want to integrate Istio, or other advanced workloads such as OpenKruise, so they will use the definitions, as we said: the component definitions and trait definitions. They develop those CUE-based definitions and integrate them into the KubeVela system, so that the KubeVela system can be extended and gain more powerful capabilities by leveraging open source projects. Also, with the component part, it is not only Kubernetes resources that can be defined in an application component.
E
For example, you can set up a relational database on Alibaba Cloud with the use of certain types of components. In addition to hybrid environments, we can also make the application be deployed into multi-clusters, and those clusters can come from various sources, such as on-premise Kubernetes or Kubernetes offerings provided by the cloud providers. And here is how we specify the destination of the application.
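One way the delivery destination described here can be expressed is a policy section on the application; a hedged sketch, with placeholder cluster names:

```yaml
# Illustrative: a topology policy selecting which clusters the
# application's components are dispatched to.
spec:
  policies:
    - name: deploy-to-two-clusters
      type: topology
      properties:
        clusters: ["on-prem-cluster", "cloud-cluster"]  # placeholder names
```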
E
Okay, after that, if you want to integrate more capabilities into your system, there is the KubeVela add-on. A KubeVela add-on includes an operator and the OAM definitions. The operators are provided by the communities, such as the FluxCD operator or the ClickHouse operator, and they are embedded in the KubeVela application as well; these add-ons also carry the OAM definitions.
E
For example, once you have installed the FluxCD controller in your Kubernetes cluster, what you need to do to let your application developers use FluxCD is to install some OAM definitions, for example the helm definition. Then the application developers will be able to dispatch Helm charts into their clusters. So the OAM definition is the bridge that connects the KubeVela system with other open source projects, and we already have over 50 add-ons contributed in the community now.
E
So if you want to directly leverage some existing capabilities, you can just use some simple commands, such as `vela addon enable`, to set up your development environments or some production environments. Then, for the application delivery part, if you have installed such add-ons, you will be able to use those capabilities inside your application. For example, you can use the Terraform part to deploy cloud resources, there are Helm chart components provided by FluxCD and the other CD parts, and there are also Istio parts and OpenKruise parts.
E
So you can build customized add-ons and install them in your system to make arbitrary integrations with open source projects. After we have defined what an application should contain, a more detailed question for KubeVela users is how to get fine-grained control over its components. For example, some of the components might have dependencies between them, so we might need to deploy those component resources in some specific order. That is where the KubeVela workflow comes in.
E
So if the application does not need some resource anymore after an upgrade, the no-longer-used resource will be deleted and recycled, and there are advanced policies that can customize those behaviors as well. We also have application version control, which can do rollbacks and keep a record of historical revisions, and it is also possible to leverage command-line tools to inspect the differences between different application revisions.
E
And finally, we also treat observability as a first-class citizen in KubeVela. There is a bunch of tools that can help KubeVela users observe their applications. For example, on the command line you can use the CLI tools to inspect the status and spec interactively, and we also have a fancy UI interface if you want to interact through the browser instead of the command line. In the browser you can see the managed resources under the application, and there are also interactive ways for you to dig into the details of a resource.
E
If you really need to do that, we can also leverage some existing open source projects to help us build a fully functional observability platform. For example, we can integrate the Prometheus server to provide metrics monitoring, and we can have the Loki part collect the logs from the application and export those things to the Grafana dashboard, which provides visualizations for the application.
E
There is fine tuning, and there are also load tests that can measure the bottlenecks of the KubeVela system; we have gathered some statistics under different environments. There are adopters from different fields in KubeVela: for example, there are commercial banks and car manufacturers that use KubeVela to build platforms, and there are also game companies and cloud providers that embed KubeVela as part of their services. The KubeVela community is very active now: there are over 200 contributors from various countries, thousands of issues have been raised by far and most of them have been solved, and we also hold bi-weekly community meetings and upload the community meetings to YouTube as well.
F
Okay, so would it be okay for you if we do the questions in a more or less asynchronous way, addressing them in the TAG App Delivery channel or wherever? Because we have to proceed with the second presentation.
E
Okay, okay, so I will stop my sharing here.
D
Oh yes, hello, I assume that I should take over now, right? Yes, the second presentation. Okay, let me first share my screen.
D
Yeah, that's kind of what it's about, partly. So let me first introduce myself. My name is Alexander Block; I'm a software developer, DevOps guy and open source fan, and I've been working on a tool for the last few years which concentrates on, yeah, deploying stuff in a manageable fashion, with multi-cluster and multi-environment support, templating and so on.
D
In the last weeks I created a delivery scenario based on that tool. By the way, I didn't mention the tool's name: it's called kluctl, kluctl.io, if you want to look it up; it's also on GitHub, of course. So I created the delivery scenario that tries to showcase how it works and how it differs from other projects, for example kustomize or Helm or Flux and so on.
D
So, as you can see, it's the podtato-head demo that I'm using as a delivery scenario. Let me enter the presentation mode, here we go.
D
So this is the podtato-head project, now having a subfolder `kluctl` inside the delivery folder, and it defines a kluctl deployment project, which basically starts with a project definition, the `.kluctl.yaml`. By the way, can you also see these zoom overlays here, or is it just me? Okay, you can see it.
D
Let me do this, this should work. So the idea is kluctl works on the basis of targets. Maybe first: kluctl is a CLI tool, so you have a tool to control your deployments, and kluctl is a way to structure and define your deployments. A deployment is, of course, a bunch of manifests; it's a bunch of kustomize projects or kluctl deployment files, plus templating.
D
Okay, so instead of applying manifests to your current kube context, or specifying which kube context you want to deploy to, the idea is that you define a target that defines all this. For example, we can say we have a target named test, and we define that it should go to this context.
D
We also define some entry-point configuration called args, arguments, and we can have multiple of these targets. The idea is that whenever you do kluctl CLI invocations, you work based on targets. So you say: I want to deploy to test, I want to deploy to prod, instead of saying: I want to deploy these manifests to this context with this configuration and stuff like that. So, as I said, this is the entry point, and when I later deploy stuff, it uses this as the entry configuration.
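The target mechanism described here might look roughly like this in a `.kluctl.yaml`; the context name and the args are illustrative assumptions, not taken from the demo:

```yaml
# Illustrative .kluctl.yaml: two targets sharing one project,
# each carrying its own kube context and entry-point args.
targets:
  - name: test
    context: kind-kind          # assumed kube context
    args:
      environment: test
  - name: prod
    context: kind-kind
    args:
      environment: prod
```

An invocation then selects a target rather than a context, e.g. `kluctl deploy -t test`.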
D
Where to continue... The next thing is that you have a deployment.yaml. Please ignore most of it; for the delivery scenario it implements a lot of stuff. What's important for us right now is the list of deployments. A deployment in this case is not a Kubernetes Deployment, it's just a deployment sub-project or whatever; it just means this folder should be deployed. So, for example, we have the registry secrets.
D
Okay, that's also a bad example; let's continue with the podtato services. What we have right now is an include, meaning we want to include a sub-deployment. A sub-deployment again contains a deployment.yaml, and what this does is it defines multiple deployment items. A deployment item can be a kustomize deployment, internally it could also be a Helm-based deployment, or it could just be a folder consisting of manifests. As a very simple example, we have a kustomize deployment here, which is the namespace.
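A sketch of the kind of root deployment.yaml being walked through here, with includes of sub-deployments; the folder names are illustrative assumptions:

```yaml
# Illustrative root deployment.yaml: each item is either an include of
# a sub-deployment folder (with its own deployment.yaml) or a path to
# a folder of manifests/kustomizations.
deployments:
  - include: namespaces        # sub-deployment with its own deployment.yaml
  - include: podtato-services  # assumed name for the services sub-deployment
  - path: misc                 # plain kustomize/manifest folder
```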
D
Actually, it's just a bunch of manifests in this case, so it's just simple manifests with templating, and that's where the target definitions come into play. We are always assuming that we are deploying to a target; that means we have some entry-point variables that can be used, for example the target, which has the name, and we also have the arguments that were provided for this target, and a few other things.
D
Based on that, we include multiple such deployment items; let's say what's below misc, some config map, first. The delivery scenario is based on the kustomize scenario that can be found in the same project. What I did is I took all these manifests and kind of tried to restructure them the way I would do it with kluctl: so, for example, not have everything in one folder, but instead kind of split it up.
D
Everything that belongs together is in one folder, everything that is entry related is in another folder, and so on.
D
So let's use this one as an example. We have the entry: the deployment project defines that we want to deploy this deployment item, which is the entry in this case. It's a kustomization; it's just plain kustomize as you're used to it, with the difference that I could use templating here (I'm going to explain templating a little bit later). So it just adds a few resources, for example the deployment entry, which is really just a nearly exact copy of the deployment I found in the kustomize scenario.
D
The difference is that I'm using templating here for simple stuff. That's basically how it's structured; you can make it as complex as you want, or as simple as you want.
D
So that's the basic project structure. I realize that I'm a little bit unorganized right now; I hope I'm not opening too much and I hope that stuff is clear at the moment. Are there any questions so far?
D
Okay, let me start from scratch here, because I already deployed everything; I'm just deleting some namespaces so that we can show it. So the idea is, you just enter the project where the `.kluctl.yaml` is located, and just based on that project, which includes the yaml and all the sub-deployments and deployment items, you basically have everything that you need to deploy it. So what you can do is run `kluctl deploy` with a target; that's what I explained.
D
It shows errors because the namespaces are not created yet, but if I actually do the deployment now, the namespaces will be created and the errors will be gone. It's asking me if I really want to do that; I'm saying yes. Now that I said yes, stuff is being deployed, and in k9s we should see that everything is starting up. So what I did now is deploy the test target. I can do the same thing with the prod target, and then maybe I should explain where the two targets differ.
D
So maybe I first deploy it, so that you see that stuff is actually being deployed twice; it's deployed twice into two different namespaces. And now the magic is that inside your deployment you are using... wait, am I in the wrong project? Not great.
D
So you're using the kluctl project with the target as the entry point, and it defines some entry-point variables. You can do whatever you want here; it's just plain yaml that you can define. Then, inside the deployment, we include some variable files based on the entry-point configuration that we provided.
D
Some stuff is common; for example, let's say the replicas are all set to 1 by default, we have linkerd enabled by default, some other stuff. And then, based on the entry-point configuration, we load another configuration file, which is also based on the templating that we use here. So in this case we would use config/non-prod, for example, if we deploy to test, or config/prod if we deploy to prod.
D
What happens is: first this one is loaded, then, based on that, this one is loaded, overriding or merging everything from this one into the first one, giving a completely new templating context. You can repeat that as often as you want, and everything that is then included as sub-deployments or deployment items can use that templating context.
D
So as an example, we have defined the replicas here. Because we have loaded it in the root deployment project, we can also use it in all sub-deployments, including the actual deployment items. So, for example, we can use the replicas entry here as a simple template variable. We can override stuff: if one file is loaded first and then the next one is loaded, it can override stuff from the first one. For example, by default we have linkerd enabled, but for whatever reason we disable linkerd on test.
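The vars loading and overriding just described can be sketched as a YAML stream; the file names, keys and values are illustrative assumptions:

```yaml
# config/default.yaml: defaults, loaded first.
my:
  replicas: 1
  featureEnabled: true
---
# config/non-prod.yaml: merged on top for the test target,
# overriding individual fields from the defaults.
my:
  featureEnabled: false
---
# Any included manifest can then reference the merged context
# via templating:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: entry
spec:
  replicas: {{ my.replicas }}
```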
D
You can use the templating to, for example, disable a complete branch of stuff that you'd like to deploy. And now that I have disabled linkerd for the test environment, for whatever reason I'm doing that, I can go back to the CLI, and if I do a prod deployment now, I would expect that nothing would change, because I haven't changed anything for prod; and this is what kluctl actually tells me: nothing has changed. I could still try to deploy it, but nothing will happen. If I do the same for test...
D
Let's do another example. I should close every other project; for whatever reason it's always opening the wrong one. So let's do another example: let's say on prod I want to have two replicas for entry instead of three. So up to this point I had overridden it with three; now let's say I want two.
D
Instead, if I deploy that to prod now, what it will tell me is what would change. I can now confirm, or not; I say yes, and it tells me what has changed. That's why you see two diffs all the time: what's happening here is it's doing a dry-run apply on the server side, so what you see here is actually what will happen.
D
There's kind of a nearly 100% guarantee that this is the case, and what it shows you here is what really has happened, so before apply and after apply, and the diffs between that. I'm also using structured diffs, so I'm not doing unified diffs, but instead showing the JSON paths where something has changed, and then what has changed there, which is a lot easier to read if you have hundreds of changes in your deployment for whatever reason.
D
What else, in regard to the delivery example?
D
Yes, it turns out I'm a little bit unorganized today. Do you have questions so far? Is anything unclear?
D
Nope? Okay, so where to continue... Another thing that I can maybe show, which is not part of the delivery scenario but possible with kluctl, is the use of Helm. I've prepared it here; I have a Helm demo locally. As I said, it's not part of the delivery scenario that I created the pull request for. So I have a Helm integration here, which works by specifying which Helm chart I actually want.
D
So the repo, chart name, chart version, release name, and which namespace to deploy to. As you can see, templating is possible in here; basically, templating is possible everywhere, including the Helm values, so I can also do templating here. I use podinfo as an example to showcase it and did some configuration here. As you can see, the idea in kluctl is that you should pre-pull Helm charts.
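The Helm integration described here is configured per deployment item; a hedged sketch of such a configuration, using podinfo as in the demo (the chart version and namespace expression are assumptions):

```yaml
# Illustrative helm-chart.yaml for kluctl's Helm integration.
helmChart:
  repo: https://stefanprodan.github.io/podinfo   # podinfo's public chart repo
  chartName: podinfo
  chartVersion: 6.2.2        # assumed version
  releaseName: podinfo
  namespace: "podinfo-{{ target.name }}"         # templating works here too
  output: deploy.yaml        # rendered manifests are written here
```

Pre-pulling the chart into the repository, as recommended in the talk, is done with the `kluctl helm-pull` command.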
D
Okay, now it's better. So kluctl can work without pre-pulling, but I always suggest doing it, because this gives the advantage of allowing you to add third-party Helm charts into a git repository, so you actually know that nothing has changed externally without you knowing about it. Also, if Helm charts disappear for whatever reason, and it happens more often than one likes, you still have a copy of the chart inside your source code, and it's a lot faster to deploy stuff if it's already part of your actual project.
D
If you have that, it's the same as with everything else: just do a kluctl deploy. Let's say we deploy to test, and what you see here now is that two new objects would be created, a Deployment and a Service. It again asks me if I'm fine with that, so I let it deploy. I should see it appear now, and there it is.
D
I will use some port forwarding to actually show that it's running; there it is. As I explained, here you can see that it's pre-pulled; it's part of the project now, just below a subfolder. As I said, it's suggested to add it to git, but you don't have to. And what you can do now is, let's see what's configurable here; for example, redis is enabled or not enabled.
D
So let's enable redis. This is a good example, because if you were only able to do this based on git, you would only see that some variable has changed, but you would not be able to realize what that actually means, because enabling redis doesn't just mean that something new is deployed; it also means that some configuration is happening in other stuff that is, for example, using redis.
D
So in this case we will not just see that redis is being deployed as new objects; you can also see that some container is being reconfigured, or some deployment getting a cache server added. And now, if I'm fine with that, I say yes, it does it, it restarts, and we see redis running here.
D
The good thing is you can use templating everywhere, including the Helm values; you can use ifs and elses here, which helps, because there are examples... For example, redis can be run in two different modes: one is the replicated mode and one is the non-replicated mode. If you configure it, or if you change that configuration, you also have to provide other values that you are not allowed to provide in the other case. That means things might require either having two value files, or just having a simple conditional.
D
Yeah, in the delivery scenario I of course added the test.sh and added it to the GitHub workflow, so stuff is being tested automatically: it will download kluctl, create a kind cluster and do different kinds of deployments to it. So it's basically the same as all the other delivery scenarios, so that it can be compared.
D
Yeah, what else...
D
Yeah, as I maybe have already shown, you can do prunes and you can also do deletes. So let's say we disable redis again and change the color to green, and if you do that again, the deployment... Or, I should have increased the font size; maybe it's very hard to see stuff. As I would assume... wait, why is nothing happening?
D
We also see that some objects got orphaned, meaning they are not part of the deployment anymore, but it knows that they were part of the deployment. So we can do the deployment and, after that, do a prune to get rid of redis, for example, because we don't need redis anymore, because for whatever reason we changed it; and then our redis is gone.
D
It's also possible to use kluctl without a kluctl project, so the complexity of a project, the targets definition and the deployment yamls, is not necessarily required. You can also use a plain kustomize project, which I can show as well.
D
So let's go into the unmodified kustomize deployment. What you can do here now is just `kluctl deploy` without providing a target, because we don't have any targets right now. That means it will revert to the behavior that is known by people in the Kubernetes world: it will just use the current context and the current namespace. So we are going to deploy the current kustomize project, with the help of kluctl, to the default namespace, and we should see it appearing in the default namespace.
D
Now the nice thing is, we can use templating on a kustomize project now, simply by, let's say, for example...
D
Let's make this one configurable; let's call it entry, because it's actually args.entry_replicas. And what you can do now is, when we do the deploy, give it an argument, entry_replicas, and set it to three, for example; and we have the same behavior that you have seen before: we have the diffs, it does a full dry run, we can say yes or no, and it will afterwards tell us what actually happened, which should be the same as what it predicted would happen. And yeah, we've got templating on top of kustomize now.
D
There is a YouTube video that explains it a little bit more, and better, in a more organized way. I can post it, or you can look it up: if you go to the Rawkode Academy channel, it is explained a lot better there. So yeah, that's it!
F
Okay, you already raised pull requests for this repository, right? Yeah.
D
Exactly. So everything that I have just shown is part of the delivery scenario that you can see in the podtato-head project, so it's easily comparable to all the other stuff. Yeah, I'm waiting for reviews and questions and feedback, and so on.
D
One other thing I can say: what you've seen right now is not GitOps, but it's completely compatible with GitOps, so you can do whatever style of GitOps you want. You can run it in a pipeline, and you can also run it through a controller; there's a kluctl controller available, where you have a custom resource that defines your kluctl deployment, which target to deploy and which arguments to provide, and then it can handle everything that you like. The idea is that, whatever you do, you can always switch between modes.
D
You can do a GitOps style, for example, on prod, but still do a dry run or a diff on prod before you push something, or you can revert to working completely from the CLI in dev environments, all based on the same project definition; everything is exactly the same. And then let GitOps take over whenever you are going to the next environment, for example to the test or the prod environment, or whatever you like. At the same time, if you, for whatever reason, realize stuff is broken on prod and you need to act immediately...
D
...you can still do that, because you can go into k9s and do whatever fix you need to do. I mean, it's bad, it's evil, we all know that, but it happens; and the good thing is that if you use a correct field manager value, kluctl will respect that, no matter if you're using GitOps or the CLI, because it's completely relying on server-side apply. I also created a blog post a few days ago...
D
Actually, it's already two weeks ago, on the kluctl.io blog, describing how I use server-side apply in kluctl and how it allows to kind of let everyone live with GitOps, non-GitOps, pipeline ops, whatever they like. I also created a CNCF Sandbox application; there is an issue in the CNCF sandbox project, and I'm also waiting for review and feedback on that. The project is still very fresh; I'm the main, actually the only, maintainer doing most of the contributions.
D
I hope it will get some traction and also bring other people to contribute. I hope that other people are going to use it. I mean, as I said, it's very early, which means there is some risk for people to actually start using something like that, because you don't know where the project will go, of course. But I think it is already stable enough that you could use it in production without having too much risk. So yeah, that's the current status.
F
Okay then, thank you for the presentation. One question which I have is: why should I choose kluctl instead of Helm?
D
Yeah, it's one of the questions that always comes up, because it does much more than Helm does. It's, yeah, how to explain it... I think the easy way to put it: the way you can structure your project, and the way you can work with configuration, allows you to handle much, much larger deployments.
D
So on one side, you can have a very simple deployment, for example the podtato-head one; that deployment is a very simple one, having just a little bit of configuration. But you can also have very large deployments with a lot of configuration from different sources. I have just shown, for example, a file as being the configuration source. Actually, I have shown two sources of configuration: one is the args, so kind of the entry point, and based on that I loaded more configuration, and based on that I could even load configuration from other git repositories.
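The targets-and-args entry point described here could look roughly like the following project file. This is a minimal sketch assuming kluctl's `.kluctl.yaml` conventions; the target names, contexts, and arg names are illustrative, not taken from the demo.

```yaml
# Hypothetical .kluctl.yaml: targets act as the entry point, and args feed
# further configuration loading. Treat the exact keys as illustrative.
targets:
  - name: dev
    context: dev-cluster        # kubeconfig context to deploy into
    args:
      environment: dev
  - name: prod
    context: prod-cluster
    args:
      environment: prod

args:
  - name: environment           # declared arg, set per target or via the CLI
```

With a layout like this, knowing which targets exist is enough to deploy the project, which is the point made later about taking over someone else's kluctl project.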
D
I could load configuration from the cluster itself, and so on. So I believe that the way you can structure your project aligns more with what you need for large deployment projects, and also for multi-environment and multi-cluster deployments, because it gives you a lot of tools and options to do that in a convenient way. As an example:
D
If you encounter a kluctl project in your company, because you just started there or because you have to take over a project, you just have to know which targets exist, because whoever created that project has already perfectly defined how to deploy it: he defined the targets and the configuration required for them. And in one of the next versions, it will even allow you to ship the kubeconfig in a sops-encrypted way.
G
My question would be, and this is the other co-chair speaking, sorry for being late today, but it was an emergency: how would this work, for example, in an environment where I'm already running Argo and have my ApplicationSets already defined? You might also have some of the target definitions that you have right now already set up in the GitOps tool.
D
So I have looked into Argo integration; it turned out to be much harder than I wished. I'm not sure if I will be able to have an Argo integration, because the architecture of Argo makes it very hard to integrate something like kluctl. You have the plugins, but the plugins require you to just write out some manifests and then give full control to Argo over those manifests, and that's not what kluctl is doing. Kluctl wants to, actually needs to, deploy all this stuff with its own implementation.
D
For all the features to work, there's a lot of stuff that you can control through annotations. For example, you can do a lot of conflict resolution just by setting annotations on objects.
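The annotation-based conflict resolution mentioned here might look like the sketch below. The `kluctl.io/force-apply` annotation appears in kluctl's documentation, but verify the exact names and semantics there before relying on this; the Deployment itself is a made-up example.

```yaml
# Hypothetical manifest fragment: the annotation asks kluctl to overwrite
# fields during server-side apply even when another field manager
# (e.g. an operator or autoscaler) currently owns them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podtato-head
  annotations:
    kluctl.io/force-apply: "true"   # resolve apply conflicts in kluctl's favor
spec:
  replicas: 2
```

Because this behavior lives on the objects rather than in a pipeline, it works the same whether the deployment runs from the CLI or through the controller.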
D
So it wouldn't be compatible with something like Argo CD. Flux, on the other side, is easier, because flux implements everything in its own controllers, which just means that I had to implement my own controller, which I already did. So there is the flux-kluctl-controller, which does kluctl deployments in the flux style, so you can do that.
D
It of course means that you cannot use plain Kustomizations from flux anymore; you would have to switch to KluctlDeployment custom resources, which then reference your project and all the configuration. Yeah.
B
D
Yes, as I said, GitOps is easily possible, but probably not with the existing tools. It's easier with flux, but it will be hard with Argo CD. If they ever decide to change their architecture to, say, allow plugins, or whatever they will call it at that point, more control over the deployment process, I can consider implementing it for Argo. Until then, I don't see how I'm going to implement that.
G
I mean, you know the sort of CNCF end user reports that all say GitOps tooling, and Flux and Argo, are widely used in the community, and I think it would be good to have an answer: what should I do if I'm already using Flux in the company, if I've already decided to use Flux or Argo? What would it mean to combine these two together?
D
So generally, I mean, you can use both at the same time; you can even use flux and Argo at the same time. So nothing stops you from also using kluctl. And the argument I try to make is: if you decide to go for flux or Argo, you make a decision that means you fully go with Argo or flux, because if everything is deployed through flux, then everything is going to be deployed through flux.
D
True, but at the same time it adds features to Kustomize, for example substitutions, which Kustomize doesn't support natively. If you start to use that, and I would say that many people are using it, because it makes flux very powerful, then you lose the ability to easily use Kustomize from the command line. The same goes for Helm: if you start to use the flux features, you kind of get locked in to flux. And I assume it's the same for Argo; I didn't look too much into Argo CD in that regard, but I assume it's the same.
D
So if you end up with complex deployments using all the features from these GitOps tools, you will be completely dependent on GitOps, and that goes for every environment.
D
At all times you will have to rely on flux. I think it's a lot of value if you are still able to kind of revert to other workflows, for example doing the exact same deployment from your console, from the CLI. As an example: if you have environments that run in a special mode in prod, where the GitOps part is just replacing images, that's the only thing it does, so it doesn't touch the infrastructure-related stuff. If you do large changes, for example introducing some new way of how stuff is communicating, and I would have to make up an example now, we can mix that: we can let the GitOps style keep replacing images, and when we feel ready, we can use the CLI to do the full-blown upgrade, which can break, and which can mean that you have to do manual interventions at that point in time, and so on. So having the option to go both routes at all times is, in my opinion, very valuable.
G
Normally, we also make the recording available afterwards, so that we can share it more broadly with people who could not join today. Also, if you want another recording to be shared, because you mentioned one before, please just let us know as part of the meeting docs, and we can also share it with the wider community.
F
Okay, thank you to everyone who presented today. As we said before, we are already out of time. I think the next meeting is in around two weeks, and in two weeks we also have some presentations in the schedule at the moment.