From YouTube: March 3, 2022 - Ortelius Architecture Meeting AUS TZ
A
Okay, so let me share my screen and I'll give you a quick update on where we're headed with this. Also, Joseph, this is a diagram that Sacha out of South Africa put together around the GitOps process.

For the GitOps process we picked three or four tools. One of them is GitHub, obviously, so we're focusing on GitHub initially, plus Keptn, Ortelius, and Argo. The flow we're looking at starts with the CI workflow part of it. This is the initial check-in process: we have our developers here coding away.
A
They go ahead and make their commit into GitHub. Whether it's a PR or a direct commit doesn't really matter at that level.
A
From there we go off to the Keptn side. In this example Keptn is the build engine; you could easily replace it with Jenkins or GitHub Actions, it doesn't really matter on that front. The reason we chose Keptn is that it has a lot of the event-based platform and framework pieces that we want to move towards, instead of a more traditional Jenkins-style pipeline.
A
On the Keptn side, once the push is complete we do a Docker build, and once the build is complete we push the image over to our Docker registry. From there we calculate the digest for that Docker image. A weird side effect with Docker images is that you don't get a manifest digest until the image is actually living in a repository, so we go and calculate that information after the push.
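That digest step can be sketched in a few lines. A registry's manifest digest is just the SHA-256 of the manifest bytes it stores, which is why it only exists once the image has been pushed; the helper below is our own illustration, not an Ortelius or Keptn API:

```python
import hashlib

def manifest_digest(manifest_bytes: bytes) -> str:
    """Return a registry-style digest ("sha256:<hex>") for raw manifest bytes.

    Registries compute the digest over the exact bytes of the stored
    manifest, so the value is only knowable after the push completes.
    """
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()

# Example: digest of a toy manifest body, as if fetched back from the registry.
manifest = b'{"schemaVersion": 2, "mediaType": "application/vnd.docker.distribution.manifest.v2+json"}'
print(manifest_digest(manifest))
```

In practice you would GET the manifest from the registry's `/v2/<name>/manifests/<tag>` endpoint and hash the response body; the sketch only shows the hashing step.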
A
From there we tell Keptn, "I'm done with this information, go ahead and tell Ortelius about it." So we have an event being sent out here that goes over to Ortelius, and Ortelius is going to do a couple of things when it gets that event. One of them is to create what we call a new component version: every time we do a build, you get a new component version for that artifact.
A
So all the information from the build system gets passed over to Ortelius. We save that as a component version, and then we create a new application version. Application versions are made up of component versions, so we end up with this versioned view.
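The build-finished event described above can be pictured as a CloudEvent. The envelope fields (specversion, id, source, type, data) come from the CloudEvents 1.0 spec; the event type and the contents of the data block below are illustrative assumptions, not the actual Keptn or Ortelius schema:

```python
import json
import uuid
from datetime import datetime, timezone

# A CloudEvents 1.0 envelope carrying hypothetical build metadata.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "keptn/build",                 # assumed source URI
    "type": "sh.example.build.finished",     # illustrative type, not a real Keptn type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {
        "component": "checkout-service",     # hypothetical service name
        "image": "registry.example.com/checkout-service",
        "digest": "sha256:0123abcd",         # the digest calculated after the push
        "gitCommit": "a1b2c3d",
    },
}

print(json.dumps(event, indent=2))
```

Ortelius would read the data block, record a new component version, and then recompute the application version from its member component versions.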
A
So when we do this update to the values file, for example, that's the one Argo is going to be listening on. Keptn down here is only listening for source code changes, so it ignores that update to the values file.
A
When Argo grabs it, it takes that values file and starts syncing the cluster, whichever one it is, so that it's in sync with the repo. Once Argo is done, we need Argo to send an event back over to Ortelius saying that it's been completed. That way we can log the process: we know the deployment started, we need to know when it's completed, and that's where the event coming back gives us that.
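The values-file update that Argo watches can be sketched as a simple rewrite of the image digest in a Helm values file. The file layout and key names here are assumptions for illustration, not Ortelius's actual format:

```python
import re

def bump_image_digest(values_text: str, new_digest: str) -> str:
    """Replace the image digest line in a (hypothetical) Helm values file.

    Argo CD watches the repo, so committing this one-line change is what
    triggers the sync; a build listener that only reacts to source-code
    changes, like the Keptn setup above, ignores it.
    """
    return re.sub(r'(digest:\s*)"sha256:[0-9a-f]+"',
                  rf'\g<1>"{new_digest}"', values_text)

values = 'image:\n  repository: registry.example.com/checkout-service\n  digest: "sha256:0123abcd"\n'
print(bump_image_digest(values, "sha256:feedbeef"))
```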
A
So I think we have one arrow wrong here on that. Then, once Ortelius knows that Argo is done with the deployment, Ortelius sends an event back over to Keptn to say that the deployment was completed at that level. Once the deployment is completed, we go into the Keptn quality gate and check that we pass it. You can kind of see it down here; in our example we're just going to do some basic monitoring to confirm it's good to go, and from there we move on to the next stage of the pipeline, which is moving up to QA. When we get to QA the process kind of repeats.
A
But some of the steps are skipped. The build step is skipped, for example, because we're just going to take the image that we built previously, move it along, and deploy it as part of that process. Same with production: production and QA are pretty much identical at that level, except that at production, after we get the quality gate result, we decide whether we're going to roll back or keep it there.
A
So that's the high-level process we've been focused on with those tools. We're trying to make everything event-based as much as we can, with CloudEvents across all the communication at that level, except obviously where we update the Git repo directly so Argo can pick it up. Right now we're not doing any event on the GitHub side; we're doing a direct update against the repo between Ortelius and GitHub. So that's where we're at, just to bring you up to speed on what we've got going on and what we're trying to accomplish.
A
It's amazing how complicated the GitOps model is when you start talking about microservices and doing this at scale. When we're talking about this step here, where we're updating the dev values, this could happen across 100 different poly repos.
A
When
we
do
the
deployment
of
the
application,
so
an
application
could
have,
let's
say
200
microservices
and
we've
updated
100
of
them.
We
would
actually,
at
this
point,
go
ahead
and
update
100
repos
with
those
new
values
and
argo
would
go
ahead
and
handle
it
from
there.
So
that's
that's
kind
of
like
where
we
have
have
to
wait
for
events
to
come
back.
Just
let
us
know
when
things
complete.
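That fan-out-and-wait pattern can be sketched simply: record which repos were updated, then mark each one complete as its deployment event arrives. Everything here (repo names, event shape) is illustrative:

```python
def track_rollout(updated_repos):
    """Return (pending set, handler) for a hypothetical fan-out rollout.

    Each repo gets its values file updated; Argo CD syncs them
    independently, and the application is only considered deployed once
    every completion event has come back.
    """
    pending = set(updated_repos)

    def on_deployment_finished(event):
        # The event is assumed to carry the repo it was fired for.
        pending.discard(event["repo"])
        return not pending  # True once the whole rollout is complete

    return pending, on_deployment_finished

pending, handle = track_rollout(["svc-a", "svc-b"])
print(handle({"repo": "svc-a"}))  # False: svc-b still pending
print(handle({"repo": "svc-b"}))  # True: rollout complete
```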
A
So
I
don't
know
if
that,
if
that
makes
sense,
what
we're
trying
to
do
or
if
there's
any
questions
that
you
have.
B
I've got a couple of thoughts from the Keptn side. The first is that Keptn is the workflow engine, so it doesn't natively have something to build Docker images. I'm wondering what you're going to use to actually achieve that build, because Keptn is going to orchestrate those steps, but you're going to need something, a service in Keptn, or the job executor service running something like Kaniko, to build that image.
A
I thought Brad said that there was, like you said, the job executor service or another, for lack of a better word, plug-in that existed for the Docker build. Did I get that wrong?
B
Yeah, no, it'll work. I'm just pointing that out for clarity: the Keptn core can't really build Docker images out of the box, and I don't want that perception to be there. So yes, we can use something like the job executor, which can run Kaniko to build those images. That's absolutely fine. The other question I had is about the SLIs and SLOs for the quality gate: where are they being defined?
A
We haven't thought through yet how they're going to be entered. Initially it was just going to be the SLOs and SLIs at the repo level. So for that microservice we would have those YAML files defined ahead of time, to decide on what we determine as a good quality gate for that process. That's our thought right now; if you have any better suggestions, we're open to it.
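A repo-level SLO definition could look roughly like the following. This is loosely modeled on the shape of a Keptn slo.yaml, but treat the field names and thresholds as an illustrative sketch rather than the exact schema; it's built as a Python dict so whatever tooling commits it to the repo can serialize it to YAML or JSON:

```python
import json

# Illustrative repo-level quality-gate definition for one microservice,
# loosely following the shape of a Keptn slo.yaml (field names assumed).
slo = {
    "spec_version": "1.0",
    "objectives": [
        {
            "sli": "response_time_p95",
            "pass": [{"criteria": ["<=+10%", "<600"]}],  # relative and absolute bounds
            "warning": [{"criteria": ["<=800"]}],
        },
        {
            "sli": "error_rate",
            "pass": [{"criteria": ["<=1%"]}],
        },
    ],
    "total_score": {"pass": "90%", "warning": "75%"},
}

print(json.dumps(slo, indent=2))
```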
B
You know, for the basics. Ultimately, as long as they're there, Keptn isn't going to care, and even if there was a UI, behind the scenes you'd be generating those YAML files anyway, so you get to the same point.
A
On the Ortelius side, one of the things we're looking at as an enhancement, the next step after we get some of those basics working, is to bring in either Backstage or Cookiecutter.
A
So when a new developer wants a new microservice, they would go in and say: this is the name of it, I'm the contact for it, it's a Python FastAPI service I want to write. At that level either Backstage or Cookiecutter would generate the corresponding infrastructure, for lack of a better word: the Git repo, some stubbed-out code, maybe a base deployment YAML file, some service YAMLs for Kubernetes.
A
It does all that stuff, maybe some Prometheus configuration or Istio, at that level. And then, as part of that, we could do the SLIs and SLOs through one of those types of tools as well.
B
Yeah, and the final one, because this is something I've been thinking of as well, was: how are you going to surface the quality gate result, just the pass, warning, or fail, against either a microservice or an app itself? Are you going to surface that to the user, or is it kind of behind the scenes?
A
In Ortelius we can have that quality gate information. We're reworking some of our tables to have columns that say what the quality gate looks like at that level. One of the tricky things is that Ortelius looks at the world across all environments and all clusters, so it's like the 50,000-foot view looking down, and that's where it gets a little bit tricky: you're looking across dev, test, and prod environments, what they look like and what their quality looks like at that level. So what we're running into is more how to present it in the UI. We know we can gather it; that's the easier part.
B
Yeah, because my initial thought was to start simple, at the app level. Just say: my app is healthy because this collection of microservices is healthy. And then, if we want to drill in: well, my app is in a warning state, okay, let me drill in and see which particular microservice is in an error, failed the quality gate, so I can feed that back to the devs or whatever. As a first cut we could just have it at the application level. Yep.
A
And one other thing: when we start looking across, say, 200 microservices that we need to examine, and the health of the application is made up of all those services, we're probably going to need to bring in Open Policy Agent to help us determine that type of aggregation.

Some of the services may be more critical than others, so you'd want to give them a higher weight. So the policy agent is something that may come into play. We haven't done much with Open Policy Agent, but we have this vision that it's going to be needed down the road to help with the aggregation, to determine whether something is healthy, whether it actually passes the quality checks at an aggregate level.
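The weighted aggregation idea can be sketched outside of OPA first. In OPA it would be a Rego policy; the Python below just shows the arithmetic: each service's gate result is scored, weighted by how critical it is, and the application counts as healthy when the weighted score clears a threshold. All the names, weights, and the threshold are made up:

```python
# Hypothetical per-service quality-gate results and criticality weights.
SCORES = {"pass": 1.0, "warning": 0.5, "fail": 0.0}

def app_health(results, weights, threshold=0.9):
    """Aggregate service gate results into one app-level verdict.

    results:  {service: "pass" | "warning" | "fail"}
    weights:  {service: relative criticality}
    Returns (weighted_score, healthy) where the score is the weighted
    average of the per-service scores.
    """
    total_weight = sum(weights[s] for s in results)
    score = sum(SCORES[r] * weights[s] for s, r in results.items()) / total_weight
    return score, score >= threshold

results = {"payments": "pass", "search": "warning", "emails": "pass"}
weights = {"payments": 5, "search": 2, "emails": 1}  # payments matters most
print(app_health(results, weights))  # (0.875, False): one warning drags it under 0.9
```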
B
It's funny you should say that, actually, because I've just raised an issue, or an enhancement, on the Keptn side to bring in Open Policy Agent. Again, I've never touched OPA, so yeah.
B
I'm in the same boat: I've heard of it and it sounds cool, but that's the stage I'm at. I'd like to have it for sequences, though. Right now anybody can run any sequence, anytime, day or night, on anything.
B
Now,
that's
getting
better
with
role-based
access
controls
that
are
coming
in,
but
it
would
be
nice
to
say,
you're
not
allowed
to
run
sequences
during
these
times
during
what
you
know,
whatever
conditions
that
you
specify
an
old
policy
agent
to
me
right
now,
nothing
I
know
about
it
seems
so.
If,
if
anyone,
if
you
know
anyone
who
knows
opa
would
like
it
on
our
opinion.
D
Yeah, I know one of the maintainers of it as well, so I can invite him to this meeting; he's really friendly. We can present to him what we want and he can guide us on what to do as well. Nice.
A
Yeah, and I think part of the OPA work is going to be whether we just give them a raw YAML file or, like you said, Adam, for these new beginners, do a UI of some sort to make things a little bit easier: some predefined checkboxes or dropdowns to make OPA definitions easier for folks out of the box, and then maybe let them fall back to editing raw YAML for the advanced stuff at that level.

But that's the world that I've been envisioning. Like I said, it's surprising how many moving pieces there are to GitOps. Everybody thinks it's like: oh, I just check it in and it's done. It's like: no.
D
Yeah, I mean, it's good if you don't want to test your code, if you just want to ship it. One thing I'm working on at the moment is spiking Argo Rollouts. I've decided to go down the Argo Rollouts path just to start off with, because I'm trying to understand how much of the service mesh will do the work that Argo Rollouts does, and I think the service mesh actually will.

We won't actually need Argo Rollouts, but as a learning curve it's a little bit simpler for doing those different deployment strategies, so I'm playing around with Argo Rollouts at the moment. One question I had for you: I'm quite interested in Backstage at the moment as well, and it seems like there are a lot of similarities between Ortelius and Backstage. What are your thoughts on that, just for my understanding?
A
Yeah, so there are two sides of Backstage. One is the onboarding side: when a new developer wants to onboard a new microservice, for example, they pick, for lack of a better word, a template that they want to use, and they fill in some basic information. At the end of the day what you have is basically hello world for, say, the Node.js you chose, or a FastAPI Python program, deployed, up and running, with all the infrastructure, including maybe Prometheus, Grafana, Stackdriver, all that stuff, pre-configured for you. So you take an onboarding process that normally takes days down to a couple of minutes.

That's one side of things. Now, because that's the entry point for developers to get started, we gather information like the name of the service, who the owner is, what their phone number is, what type of program it is, what type of service they're writing. And because of that, when you go to the other side of Backstage, I think they can do some monitoring through the pipelines and the underlying infrastructure. Let's say it's a Jenkins pipeline that deploys the Helm chart out to a cluster.
A
Backstage gets this constant feedback from the tools. So when Jenkins does a deployment to a cluster, Backstage gets information back that it happened, and that's where it will take a look at the health of that service, do a basic health check, and surface some information at that level. I don't think it has a concept of an application, so it's basically at a service level; you can't see that this service is being consumed by, you know, 15 other applications. It also has no versioning, so whatever is running in production is the one you get the view of. You don't have a view of what's happening in development, QA, or production separately, and one of our customers has 17 QA environments.
A
So that's kind of the difference. Now, Cookiecutter is similar. Cookiecutter is a Python-based templating engine; Spotify wrote Backstage, in Go, I think, for some reason, is my guess, but Cookiecutter is Python-based, and it has the same type of templating and so on at that level. Cookiecutter has the onboarding part but not the monitoring part, from what I remember.
D
No, it does, yeah. I'm very fascinated with Backstage at the moment, and thanks for the explanation; I was really wondering about the differences between them, so that's good to know.
A
Yeah, so that's why it's actually a really good fit for Ortelius: we can gather that information through the UI about the service, then do the onboarding part, and we also do the component versioning part at the same time. That allows us to extend the monitoring side of Backstage to be as robust as we can make it with Ortelius.

So that's where we'll be able to grab that information, aggregate it together, and look at the aggregation of an application across hundreds of microservices and across clusters. Who is it, Chick-fil-A, the fast food chain here in the States? They have 3,500 Kubernetes clusters, one for every store in the States.
A
Have
no
idea
how
they
do
it,
I'm
supposed
to
have
we're
trying
to
get
some
more
detail
from
them
on
how
they
actually
roll
that
out,
but
there's
basically
3
500,
prod
environments.
D
Yeah, that's the only way I could see that scale working; Cluster API would be good for that.
A
Yeah, and it's the same with Airbnb and Uber. Both of those companies run in every single AWS region across the world, so for production they're running in several hundred clusters at that level. What they do, at least for Uber, I think, is deploy to a specific region as a canary deployment.

So they'll roll out a prod version in that region, see what happens, and then from there start rolling out across all the rest of the AWS regions and sub-regions and all that stuff.
D
That's awesome. I hadn't imagined that scale.
A
Yeah, so what you're looking at on the Argo side with Rollouts, a place like Uber doesn't use so much, because they're just going right to prod in a region. They don't even worry about, like, a canary at 50 percent, you know.
D
Yeah, it's funny, because for some reason Australia and New Zealand always get rolled out to first, since it doesn't overlap with Europe or the Americas and there are fewer people here. So we always get the bugs before they get it right. It always happens in Azure.
B
Yeah, I just had another quick thought as well while we're talking, since my brain's churning. Since we've already got the Keptn cluster and the job executor will be installed anyway, would it be a thought for the future, not now, to add the ability to define custom tasks, or extra things that users might want to do, via Ortelius, where under the hood that gets translated into basically a job executor function in the background? I don't even have a real concrete example; I'm just thinking of putting some sort of high-level abstraction on Keptn to say: you want to do this custom thing that doesn't come out of the box with Ortelius.
A
Yeah, I think that would be more along the lines of the Cookiecutter or Backstage side: as we stand up the pipeline process, those would be the tools dropping in those custom executors as part of that pipeline. Now, if something like that is shared across all the developers, then it makes it easy: we could do it just once, or, when you onboard a new service, make sure it already exists in the cluster as part of the Keptn configuration at that level. One of the goals we're trying to hit is an easy entry point into Ortelius and into the whole event-driven pipeline.
A
So
I
think,
like
along
those
same
lines,
maybe
like
a
a
predefined
shipyard
of
the
the
typical
you
know,
five
scenarios
that
you
have,
whether
you're
gonna
have
you
know
dev
test
prod
or,
if
you're
going
to
go.
You
know
dev
test
qa,
uat
beta
alpha
prod.
You
know
whatever
the
scenario
is
that
we
have
some
predefined
shipyards
that
they
could
utilize
at
that
level.
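A predefined stage layout like that can be sketched as data. The structure below is loosely shaped like a Keptn Shipyard (stages, each with a delivery sequence), but the version string, field names, and task names are illustrative assumptions; it's built as a Python dict so a tool could render one to YAML per scenario:

```python
import json

def make_shipyard(name, stages):
    """Build a hypothetical predefined Shipyard-style document for a given
    list of stage names, each carrying a plain delivery sequence."""
    return {
        "apiVersion": "spec.keptn.sh/0.2.2",   # version string assumed
        "kind": "Shipyard",
        "metadata": {"name": name},
        "spec": {
            "stages": [
                {
                    "name": stage,
                    "sequences": [
                        {"name": "delivery",
                         "tasks": [{"name": "deployment"}, {"name": "evaluation"}]}
                    ],
                }
                for stage in stages
            ]
        },
    }

# One of the "typical scenarios": a simple dev -> test -> prod pipeline.
shipyard = make_shipyard("dev-test-prod", ["dev", "test", "prod"])
print(json.dumps(shipyard, indent=2))
```

The longer dev/test/QA/UAT/beta/alpha/prod scenario would just be another call with a longer stage list.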
D
Yeah, hey folks, sorry, I've got to run, I've got a meeting. Adam, I think you sent me a message, but I couldn't figure out what Slack channel or whatever you sent it to me on, so I don't think I've responded to you. That's all right, I'll catch up with you soon; I haven't talked to you for a while. Cool, no worries, all right. Sorry, I've got to run; we'll update on Discord.
A
Now, the interesting thing is that in Ortelius anything can be a component, so we can actually have a component that would be that job executor. As a prerequisite to, let's say, my component, I could have a prerequisite that is a Keptn Docker build executor: we could make that a component that gets installed before the other component would actually use it. So there is a way, from the Ortelius side, that we could bring those in and do them on the fly as part of the process.
A
You know, the environment that you're deploying to, information about that, attributes from the application, attributes from the endpoint, and then attributes from the component: they kind of build up this stack, and we can do pre-actions at, say, the application level to make sure that everything is already installed. I've done that before with our AWS demo: as part of the deployment we have a pre-action that will rescale our cluster.
A
Yeah, this is all evolving pretty quick. So, where is it, I can't remember if it's KubeCon or cdCon, but the CD Foundation just started a new project, you've probably heard, the CD Events project, and they have a full day to talk about events, like a day-zero CD Events thing. So I'm going to try to present, basically walk through our GitOps model as a presentation.
A
If
you're
interested
I
can
get,
we
can
probably
get
some
slots
for
you
to
talk
about.
You
know
what
you
got
going
on
from
the
the
captain
side,
and
you
know
on
your
thoughts
around
events.
B
Yeah, definitely. I've actually already reached out to Andy and Rob, two guys on the Keptn side. I think one of them is doing a presentation, so obviously I don't want to double up if they're already doing it. Let me chase them on that, but if not, then yeah, I'm more than happy to put something together.
A
Yeah,
I
can't
remember
which
one
it
is
if
it's
so
kubecon
is
in
may
and
in
spain,
and
then
cdcon
is,
I
think,
in
june,
in
texas,
or
something
like
that
in
the
states
here.
So
I
think
they
are.
I
know
the
cdcon
one
is
hybrid,
that
they
we've
requested
a
hybrid
model
for
doing
our
day,
zero
presentations.
B
The
other
project
you
might
want
to
talk
to
on
just
on
that
quickly
is.
There
is
a,
I
think,
it's
a
special
interest
group
for
translation
between
different
cloud
events.
So
you
know
you've
got
the
captain
cloud
events.
You've
got
ortillius
you've
got
tool
x
over
here,
but
they're
all
different
cloud
event
types
so
see.
Then
yeah,
there's
a
there's:
a
kind
of
translation
project
going
on
I'll
dig
out
some
info.
If
you
don't
know
about
it
already,
yeah.
A
Drop
drop.
It
drop
me
the
the
info
or
just
even,
if
you
just
find
the
name
of
it,
I'll,
hunt
down
the
rest
yeah
and
destroy
I'm
on
discord
all
the
time,
so
just
jump
on
the
artillious
discord
and
just
drop
it
in
the
whole
group,
because
people
drop
things
in
there
all
the
time
about.
A
All right, cool. Well, it looks like Ben dropped and Brad dropped, so anything else from your side?
B
No. Do you mind if I post this recording in the Keptn channel, just so they're aware of what's going on?
A
Yeah, I recorded it to the cloud, so as soon as Zoom lets me know that it's been rendered, I'll get the link off to you.
B
Nice one, and yeah, as always, you know where to find me if you need anything.