
From YouTube: Knative Meetup Community #7
Description
This virtual event is designed for end users, a space for our community to meet, get to know each other, and learn about uses and applications of Knative.
On Nov. 18, 2020, we heard working group updates, and there was one demo: "Mink, a distribution of Knative and Tekton", presented by Matt Moore, Knative co-founder and TOC member, at VMware.
A
Started recording, and we are going to get started with this meetup, so welcome, everybody. This is the seventh Knative community meetup, as I was saying before. This is going to be the last meetup of 2020, and we hope to see you again in January. We may have a few new people here, because we shared this event with KubeCon North America participants.
A
So if anybody is here for the first time, welcome, and we hope that you can come and join us at other events as well. Without any further ado, I am going to go ahead and start with the agenda. We have working group updates from the Client working group.
B
It's a client-side apply, so it might still have some rough edges, but you can already use it, and we are always keen on getting feedback on this new feature, as well as on the other feature, which is kn service import. This is the counterpart of the kn service export that we introduced in the release before, and it's also marked as experimental.
B
It allows you to export and import Knative services along with their revisions that are active. Active means that they receive a certain amount of traffic, so they're part of a traffic split. That way you can easily transport your services from one installation to another one. Yeah, that's more or less it. So again, it's experimental, and if you have any feedback on this new feature, we are really super happy if you just jump into the Slack, open issues, or whatever channel we have. Thanks a lot.
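For reference, the experimental pair of commands being described looks roughly like this; it is only a sketch, and the exact flag spellings may differ from what shipped (check kn service export --help):

```
# Export a Knative service, including its active revisions, to YAML.
kn service export hello --with-revisions -o yaml > hello.yaml

# Pointed at another cluster, recreate the service from that file.
kn service import hello.yaml
```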
C
Yeah, so Roland inspired me to advertise and solicit feedback on a new DomainMapping resource that Julz added in our most recent release. The DomainMapping resource basically lets you put vanity URLs in front of your Knative services, and hopefully soon other things within Knative. So instead of having the sort of rigid foo.default.{cluster domain suffix}...
C
You can say www.mattsawesomeblog.com and serve on friendlier URLs if you want to. It's alpha, and there's a vanity-domains channel on Slack if you want to ask specific questions about that, but you can ask a question in any of our dozens of other channels too, and I'm sure we'll be happy to answer it.
C
Try it out and give feedback. It isn't integrated with things like auto-TLS yet, but that's on the plan. And I think it works with Istio, Contour, and Kourier, and maybe others, but those are the ones that I have confidence saying, so report bugs if it doesn't work with those.
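A sketch of what the alpha resource looks like, assuming the v1alpha1 schema from that release; the "blog" service name is made up for illustration:

```
# Map www.mattsawesomeblog.com to the Knative Service "blog" in "default".
cat <<EOF | kubectl apply -f -
apiVersion: serving.knative.dev/v1alpha1
kind: DomainMapping
metadata:
  name: www.mattsawesomeblog.com
  namespace: default
spec:
  ref:
    name: blog
    kind: Service
    apiVersion: serving.knative.dev/v1
EOF
```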
A
Awesome, thank you so much. And to give feedback, Matt added a channel on Slack, which is vanity-domains, so yeah, feel free to provide feedback that way. Any other updates that may not be on the agenda?
A
Okay, cool, so we are going to move to the demo part of the meeting. Matt is going to present a demo about mink, and I think you should be able to share your screen, so feel free to test that.
C
All right, so hold on, let me make the little people window small so that I can see my screen. Can everyone see my screen? I'm sharing my whole screen, because I'm going to jump between windows as part of the domain, sorry, demo. Okay, so mink! Hopefully we get through it all. I have a whole bunch of stuff that I want to demo, so apologies if I talk fast. Okay, so what is mink?
C
You know, a much smaller form factor for how you want to ship Knative downstream, and we'll see that in just a second. And then the other piece is a CLI that started with me wanting to try out the kn plug-in model, but also to start thinking about things you could do with the set of components that have been shrink-wrapped into mink, to tie together the things that we try to keep sort of at arm's length upstream, like Serving and Eventing, and the artist formerly known as Build, which is Tekton. So I'm going to switch over to my console here.
C
So, one of the things: this is a long command, but I can talk while it runs. We're going to run the mink install command. This is just going to set up the components on the cluster that correspond to the release of the CLI that's been installed. I haven't yet set it up to do anything with DNS, so I'm going to patch the mink.dev domain into this and then print the service, so we can set up DNS.
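Roughly the shape of what runs here; the mink-system namespace and the patch payload are assumptions about the install, not copied from the demo:

```
# Install the mink control plane and data plane matching this CLI release.
mink install

# Use a real domain instead of the default placeholder.
kubectl patch configmap/config-domain -n mink-system \
  --type merge -p '{"data":{"mink.dev":""}}'

# Print the ingress service so we can point a CNAME at it.
kubectl get svc -n mink-system
```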
D
[inaudible]
C
Okay, I'm going to keep talking; hopefully it's on the recording. So this is a local build of it, so it's installing from something that was built locally. If you were running the release, this would be installing from the GitHub release corresponding to the mink build. This is running against an EKS cluster.
C
You can see it spun up a couple of control-plane pods and then a couple of data-plane pods. The control-plane pods are running Serving with net-contour, Contour, Eventing with the multi-tenant broker, the sugar controller, and all the core sources.
C
And
then
the
data
plane
is
running
all
of
the
things
that
we
have
on
the
data
path
in
k-native,
so
activator
the
envoys
for
your
ingress.
C
The activator, et cetera. And then it also installed the in-memory controller, or sorry, I always say in-memory controller, the in-memory channel as well. So I printed out the services so that I could set up a little CNAME rule real quick, so I'll plop that in. And if we take a look at this, let's see, 33, so in about 40 seconds it installed.
C
Basically,
the
entire
mink
control,
plane
data
plane,
the
in
memory
channel-
and
you
know
all
it
waits
for
all
of
that
to
be
up
and
ready,
and
so
all
of
that
is
now
running
on
my
eks
cluster
here
and
then
dns
should
be
set
up
now
it
runs
tls
by
default.
So
okay,
so
we
just
installed
a
whole
bunch
of
stuff,
and
it's
actually
not
running
that
many
things.
C
The
the
three
control
plane
nodes
and
the
four
data
plane
nodes
is
because
it
installs
it
in
a
nha
configuration
by
default.
So
the
the
control
plane
is
running
as
a
stateful
set
that
distributes
the
keys
over
those
replicas
and
the
data
planes
are
running
as
a
demon
set.
So
if
we
were
running
this
on
a
one
node
cluster
that
would
shrink
down
and
scale
up
as
your
cluster
scales
up
so
okay.
So,
let's
see
so
now
that
I
have
all
of
those
things
installed
on
my
cluster
right.
C
The next thing that I wanted to do is start to play with deploying: leveraging the fact that all of those things are there to make it really easy to get started and deploy things to that cluster.
C
So I'm in the docs repo here, and I effectively wanted to be able to write something like "kn service create hello" and deploy my hello-world Go sample. But we run into this thing where it's like, okay, image, right? And now I need to be able to build this Dockerfile and do all of that. So with Tekton available, I can do builds against this cluster, and so basically there's this mink build command where you can tell it what directory to build, and let's see, it's a long path.
C
So I've got to concentrate: hello world, helloworld-go. Okay, so, from chat: "Matt, you've got a typo, director equals doc." Thank you.
C
It's way worse if the typo is in the other half of the command, because it doesn't tell you until the build completes. But thank you, Jacques. So, okay, I'm going to kick that off. Let me show what's running over here. What this is doing is it's basically taking that directory...
C
It's uploading it to a registry that I've configured (I think I'm using GitHub's container registry) as a self-extracting container image, and then it runs that as the first step in a TaskRun, before invoking kaniko, which does the Dockerfile build. You can see the output streaming here to standard error, and I've redirected standard out, which is going to be the image digest, to kn service create. So as soon as this is done... you can see the build here running on the cluster.
C
As
soon
as
this
is
done,
we
will
see
kn
kick
off
and
start
to
deploy,
I'm
just
moving
the
zoom
things
out
of
my
way.
So
I
can
see
this
and
in
just
a
moment
this
will
return.
I'm
gonna
wait
a
little
bit
longer,
because
this
didn't
give
me
the
tls
url.
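The pattern being described is roughly the following; the sample path is illustrative, and the --directory flag spelling is inferred from the typo exchange above:

```
# Build the directory on-cluster with Tekton, print the image digest on
# stdout, and feed that digest straight into the service deploy.
kn service create hello --image=$(
  mink build --directory=docs/serving/samples/hello-world/helloworld-go
)
```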
C
And curl it, right, boom: hello world. So, okay, these things have been up for about five minutes. So in five minutes we went from a blank Kubernetes cluster to running all of Serving, Eventing, and Tekton, with TLS and DNS, and built from source and deployed an image. So, one more thing: let's see, if I dump the service that we just deployed, and I grab the image that it's running and look at it in...
C
Crane,
if
you
were
watching
carefully,
you
might
have
picked
this
up,
but
I'm
actually
running
this
all
on
a
graviton
eks
cluster
right.
So
this
is
all
running
on
arm
right.
It's
running
the
the
self-extracting
source
context
was
uploaded
multi-arch.
It
runs
multi-large,
conical
image,
it's
running
all
the
multi-arch.
C
You
know
envoys,
all
the
k-native
and
tecton
components,
and
so
basically
you
know
it
installs
on
whichever
flavor
of
cluster
you
want.
The
one
thing
that
doesn't
work
which
is
going
to
be
the
next
part
of
the
demo
is
unfortunately
build
packs,
and
I
think
that's
mostly
because
I'm
not
aware
of
anyone
who
has
yet
produced
an
arm
64
build
pack
builder,
but
for
that
we're
going
to
switch
over
to
a
gke
cluster
for
the
rest
of
my
demo
and
gk
runs
a
lot
of
stuff.
C
So
I'm
just
going
to
show
the
default
namespace
for
this
one,
but
this
I've
installed
mync
on
this
cluster
already
and
yeah.
So,
but
nothing
is
running
if
I
say
cube,
cuddle,
get
k,
service,
all
name
spaces,
there's,
there's
no
c
native
services
deployed
or
anything
yet
so,
okay,
so
build
packs.
C
So
if
I
could
just
do
the
same
old
demo
with
the
docs
repo
and
replace
mync,
build
with
mync,
build
packs
or
sorry
make
build
pack,
and
it
would
do
a
build
pack
build
to
that
same
hello,
world
go
sample
and
it
works.
It
wouldn't
leverage
the
docker
file
and
you
know
through
the
magic
of
build
packs.
It
would
detect
hey
it's
go
and
do
the
right
thing
and
build
a
go
container
image,
which
is
pretty
nice,
but
one
of
the
nice
things
about
build
pack.
C
That
is
that
a
lot
of
folks
are
starting
to
leverage
it
for
higher
level
experiences.
So,
instead
of
producing
you
know,
applications
where
you
have
to
write
the
http
server
and
deal
with
all
that.
You
know
you
just
write
a
function
and
then,
as
part
of
the
build
pack
life
cycle,
it
gets
wrapped
up
in
the
http
server
and
you
don't
worry
about
it.
You
just
deal
with
functions
and
deploy
that.
C
I
wanted
to
think
through
what
it
was
like
to
deploy
sort
of
a
full
logical
application
which
may
consist
of
a
number
of
different
functions
and
other
things
like
sources
and
whatnot
and
are
all
sort
of
wired
together
in
terms
of
yeah.
You
know
triggers
and
events
and
whatnot
so,
okay,
so
I
have
a
little
sample
here.
C
Basically,
what
we're
gonna
do
here
is
we're,
just
gonna
run,
mink
apply,
and
then
this
is
going
I'll
talk
through
what
this
is
gonna
do,
while
it's
running
it
takes
about
a
minute
and
a
half.
If
history's
any
indicator,
it
spits
out
some
errors
because
it
tries
to
detect
the
user
to
run
as
but
ignore
that
that's
a
gct
builder
thing
I
mean
so
I'm
using
the
gcp
build
packs
here
for
node,
as
you
can.
Oh,
I
didn't
put
it
in
the
repo
name
here.
C
It's
in
the
repo
name
here,
okay,
so
this
is
what
we're
deploying
right
now,
so
I'm
using
scotty's
little
graph
tool
to
visualize
this.
It's
not
running
on
the
cluster
now,
but
it
was
obviously
when
I
took
the
screenshot
so
basically
that
one
command
is
basically
deploying
a
whole
bunch
of
stuff.
So
it's
deploying
a
ping
source
that
dumps
events
onto
the
default
broker.
C
It's
deploying
sockeye,
which
is
unfiltered
and
going
to
let
us
visualize
the
events
flowing
through
the
system
and
then
it's
going
to
deploy
five
functions,
which
key
off
of
the
ping
events
that
come
off
the
ping
source
and
do
some
simple
mutation
to
the
event
and
dump
it
back
on
the
broker
so
that
we
can
see
what
they
did.
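A sketch of the PingSource half of that wiring; the names and API version are illustrative (PingSource was v1beta1 around this release, and its payload field was still called jsonData):

```
# Fire a fixed {"a": 11, "b": 3} event onto the default broker every minute.
cat <<EOF | kubectl apply -f -
apiVersion: sources.knative.dev/v1beta1
kind: PingSource
metadata:
  name: ping
spec:
  schedule: "* * * * *"
  jsonData: '{"a": 11, "b": 3}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
EOF
```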
C
Okay, all the builds just completed, and then it applied a whole bunch of stuff, and you can see over here... that's good timing; I didn't even rehearse that. All right, so you can see here it's spinning up all these functions, and if I refresh Sockeye, it should be up. The ping source fires every minute. Good timing. Okay, so hopefully everything was wired up when that fired.
C
It
looks
like
a
couple:
things
were
still
wiring
up,
but
the
ping
source
produces
this
fixed
payload
with
a
is
a
number
b
is
another
number
11
and
3,
and
then
each
of
the
little
functions
responding
to
this
you
can
see.
This
is
exponent.
It
does
a
to
the
b
and
then
returns
it
to
the
broker
and
b
is
b
to
the
a.
So
each
of
these
does
just
a
little
trivial
transformation
of
the
event
and
drops
it
back
on
the
broker.
C
So if we go into, for instance, the add function: there are three files in here, and it's more or less the same three files in each of the functions; they're very uniform. So if we look at the actual index.js... I am a Node noob.
C
So please don't judge my terrible Node, but there's a simple little mutation function here that takes in the JSON payload, and you can see "add" returns something with a and b, where a is a plus b of what comes in and b is a minus b of what comes in. All of these are just trivial things like that. And then this is what we actually wrap as a function, which basically applies that mutation to a CloudEvent that comes in and then returns it as a new CloudEvent, and this changes the type.
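A reconstruction of the shape of each function's index.js as described; this is not Matt's exact code, and the event type and header handling are my guesses at how the reply gets stamped:

```
cat > add/index.js <<'EOF'
// Trivial mutation: a' = a + b, b' = a - b.
const add = ({ a, b }) => ({ a: a + b, b: a - b });

// HTTP-style handler of the kind the functions buildpack wraps: reply
// with the mutated payload plus fresh ce-* headers, so the broker sees
// a valid CloudEvent and Sockeye shows which function reacted.
exports.handler = (req, res) => {
  res.set({
    'ce-specversion': '1.0',
    'ce-id': `${req.header('ce-id')}-add`,
    'ce-source': 'mink-samples/add',
    'ce-type': 'dev.mink.samples.add',
  });
  res.json(add(req.body));
};
EOF
```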
C
So it tells us what function actually reacted to it, so that we can visualize that in Sockeye. The next bit of what's going on here is overrides.toml. This is a little bit of the duct tape showing through in terms of how I put some of this together: it mirrors the construct of project.toml in buildpacks. So you can see I'm using the Google Functions buildpack and telling it to use the add function.
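A sketch of what one of those per-function overrides.toml files might contain; the field names mirror project.toml's build table, and GOOGLE_FUNCTION_TARGET is the variable the GCP functions buildpack reads, but the exact file is my assumption:

```
cat > add/overrides.toml <<'EOF'
[[build.env]]
name = "GOOGLE_FUNCTION_TARGET"
value = "handler"
EOF
```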
C
This
is
you
know,
bill
peck's
right
now
and
really
this
is
the
pax
cli.
Has
this
concept
of
project.tamil,
but
it
doesn't
really
have
a
way,
a
good
way,
in
my
opinion,
to
deal
with
sort
of
producing
end
things
from
a
single
repo.
So
that's
where
this
override
tomml
thing
comes
in
and
then
service
diagonal.
So
here's
where
the
magic
happens.
C
So
a
lot
of
this
idea
of
sort
of
trying
to
squeeze
the
incremental
complexity
came
from
some
of
the
learnings
we
have
with
co
and
so
with
co,
like
I
think,
hopefully,
we've
reached
the
point
where
a
lot
of
folks
within
the
community
sort
of
appreciate
that
you
know
producing
new
containers,
is
you
know,
a
very
low
barrier
to
entry
and
that's
not
really
what
folks
think
of
as
the
release
artifacts
anymore.
We
think
of
things
like
the
resolved
yamls.
C
So
you
can
see
in
this
yaml
this
one
piece
of
magic
here,
where
sort
of
extending
the
idea
of
the
the
co
uris,
where
I
say
co,
import
path,
mink
apply,
make
resolve
supports,
it
actually
supports
co,
but
also
build
pack
and
docker
file.
So
you
can
take
the
sort
of
co
workflow
and
use
it
to
build
things
with
for
many
languages,
with
build
packs
or
docker
files
and
start
to
leverage
some
of
the
stuff
in
build
packs
like
function
frameworks
to
build
higher
level
experiences.
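A sketch of the service.yaml pattern this describes; the buildpack:/// URI spelling is inferred from the talk (check the mink README for the exact scheme), and the cluster-local label matches what he shows next:

```
cat > add/service.yaml <<'EOF'
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: add
  labels:
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        # Built on-cluster from add/overrides.toml at apply/resolve time.
        - image: buildpack:///add/overrides.toml
EOF
```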
C
So
what
this
is
basically
saying
is
within
the
within
the
context
of
the
sort
of
dash
directory
bundle.
That's
uploaded
build
the
for
build
pack
build
the
overrides
tomml
within
the
add
directory,
and
so
each
of
the
functions
is
going
to
specify
the
path
to
their
config
file
for
build
pack.
That
does
that,
if
you
were,
if
we
were
to
use
docker
file
here,
it
would
effectively
be
the
directory
in
which
to
find
the
docker
file
and
co.
It's
compatible
with
go
so
so
this
is
cluster
local
service.
C
It's
setting
up
a
trigger
on
the
default
broker
and
it
has
the
ping
filter
here.
If
you
were
to
change
this,
since
this
is
ad,
if
we
were
to
change
this
to
divide,
we
would
start
to
sort
of
chain
together
our
events
into
a
deeper
chain
than
that
sort
of
flat
thing.
We
have
right
now
and
then
this
delivers
the
events
to
the
add
function.
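The trigger half looks roughly like this; the filter type is the standard PingSource event type, but the rest of the names are illustrative:

```
# Route ping events from the default broker to the add function.
cat <<EOF | kubectl apply -f -
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: add
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.sources.ping
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: add
EOF
```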
C
I
think
it
would
be
super
cool
to
see
this
go
away
with
something
like
scotty's
auto
trigger,
but
but
yeah
right
now.
This
is
actually
a
surprising
amount
of
yaml,
but
like
to
do
this
in
in
vanilla
case,
would
be
much
much
much
more
complex,
so
okay,
so
I
think
we
got
so.
Let's
see
all
of
the
functions
firing,
they
should
have
all
fired
by
now.
Okay,
so
random
basically
uses
the
inputs
to
bound
generating
random
numbers.
We
see
divide
here,
we
see
swap
just
transposes
them.
C
We
already
saw
exponent,
add
is
11
plus
3
and
11.
Minus
3.,
yeah
and
divide
is
11
divided
by
3
11
mod
3.
So
all
super
simple
little
functions,
but
you
know
I
got
bored
of
adding
them
after
five,
so
so
yeah,
but
you
know
the
overhead
of
each
of
them
got
squeezed.
I
think
pretty
low.
C
I
see
a
lot
of
things
in
chat.
I
don't
know
if
folks
are
adding
or
asking
me
things,
but
I
think
that's
all
I
had
for
the
screen
share
unless
folks
had
questions,
so
I
am
happy
to
open
it
up
for
questions,
and
hopefully
I
covered
everything
I
meant
to
cover.
A
Thank you so much, Matt. Yeah, I think my mic is working. Let me see, I'm going to change the view. Okay, thank you. Do we have any comments to discuss this demo as a group right now? Any questions?
A
I saw some questions in the chat, so maybe, Alec, if you want to speak up, is that a possibility for you? Would you like to ask your questions?
E
Sure, I can speak. It's just a very basic question. So, there is a lot of functionality here; at the beginning he was saying it's like a distribution.
E
So
how
is
it
different
from
what
is
today,
if
you
install
k
native
and
techton
in
some
cluster,
there
is
some
additional
functionality
that
you
add
like
support
for
buildbacks,
something.
C
Like
that,
that's
a
good
question.
So
so,
if
you
were
to
install
all
the
upstream
stuff
right,
we
you
know
upstream
is
very
unopinionated
right.
We
we
ship
serving
core.
We
ship
net
istio,
we
ship,
istio
they're.
You
know
all
basically
in
separate
yamls
run
as
separate
things,
and
we
do
that
because
it's
very
unopinionated
right
like
we,
don't
want
to
sort
of
guide
your
hands
towards.
C
You
know
one
option
or
the
other
right,
like
they're
all
great
choices,
but
there's
it
leads
to
complexity.
Setting
things
up
right.
You
have
to
make
a
whole
bunch
of
choices
right
which,
which
channel
do
I
want
to
use
which
broker
do
I
want
to
use?
Which
ingress
do
I
want
to
use?
Do
I
want
to
set
up
tls
and
whatnot
right?
So
so
what
mync
does?
C
Is
it
basically
takes
a
position
on
some
of
those
sort
of
getting
started,
opinions
and
glues
them
together
in
a
way
where
it's
not
distributing?
You
know
different
yaml
different
deployment,
different.
What
not
for
all
of
those
things
and
it's
shrink
wrapped
into
effectively
a
single
controller
process
that
handles
all
of
it
right.
The
apis
that
are
exposed
and
everything
is
identical
to
upstream
it
it
is
identical
it
does.
C
It
exposes
zero
additional
apis
over
canadian
surveying
and
canadian
eventing,
so
shout
out
to
carlos,
so
I
believe
on
last
friday,
carlos
I
believe,
managed
to
deploy
this
sample.
The
mink
apply
node.js
sample
to
openshift
serverless,
using
openshift
pipelines
right
so
in
terms
of
the
server
side
bits
it's
it's
a
shrink-wrapped
set
of
things
but
like
there
are
no
additional
apis,
so
things
like
doing
the
build
packs.
C
It's
it's
really
just
creating
a
task
run
directly
that
invokes
the
appropriate
container
images
using
tecton,
apis
and
yeah
so
and
you
don't
have
to
deploy
k
native
services
with
it.
So
one
of
my
fun
things
to
beat
on
this
is
since
it
supports,
go
actually
part
of
the
reason
I
added
co
was
like.
I
wanted
to
make
sure
it
worked.
So
I
builds
mink
with
mink,
so
so
mink
builds
with
co,
but
then,
once
it's
deployed
you
can
then
use
mync
to
then
rebuild.
Mink
since
minkaply
supports
code
prefixes.
C
I think that does something like 20 ko-publish builds, and I have the same test for Dockerfiles too, where we self-host mink. Mink-with-mink via buildpacks doesn't quite work, for reasons, and it's mostly related to how we assume ko has laid things out in certain ways, which it's hard to get buildpacks to lay out the same way. But yeah.
F
Yeah, maybe I will ask. First of all, really awesome little thing, little mink, I would say. But let me rephrase the same question, actually, because I'm wondering: is it actually a toy or experiment for you, or do you want to support it? What is your plan with it?
C
That's a good question. So there's an issue right now to move it into the sandbox and support it better, so it won't just be in mattmoor/mink, but I think there are some interesting questions about the home and how we tie some of it together. Oh, I forgot my cheesiest little anecdote: I said I started the mink CLI to play with kn plugins, and it turns out...
C
N,
I
m
is
mink
backwards,
so
you
can
also
invoke
it
all
through
k-n-I-m
but
yeah.
I
was
invoking
it
through
mink
out
of
out
of
habit
so
yeah.
So
so
there's
a
few
interesting
things
right.
So
things
like
mink
apply,
actually
don't
need
k-native
running
on
the
cluster
unless
you're
going
to
target
k-native
services,
it
just
happens
that
canadian
services
are
probably
one
of
the
best
ways.
C
In
my
opinion,
you
know
I'm
a
little
biased
to
run
the
sorts
of
functions
and
whatnot
that
you
know
come
out
of
stuff
like
build
packs.
So
you
know
that
that
might
even
make
more
sense
in
the
context
of
something
like
the
tecton
cli,
but
I
think
I
think
the
idea
of
having
sort
of
a
nice
getting
started
experience.
C
You
know
where
you
know
folks
can
very
easily
get
up
and
running
with
a
thing
and
have
things
that
start
to
tie
together
some
of
these
different
pieces
of
sort
of
user
experience.
That
really
you
know,
I
I
think
that
we
are
starting
to
realize
the
sort
of
potential
of
what
we
set
out
to
do
three
years
ago,
with
build
and
serving
and
eventing
to
sort
of
tight
start
to
tie
together
sort
of
function,
style
experiences
on
top
of
k-native.
C
So
I
I'm
super
stoked
to
see
this
and
I'm
I'm
interested
in
sort
of
discussing
how
we
make
it
real.
You
know
I've
been
starting
to
poke
at
the
topic
of
like
you
know,
should
we
be
talking
about
sort
of
standard,
build
packs
for
cloud
events,
since
a
lot
of
folks
have
jumped
onto
that
bandwagon,
then
that
conversation
may
actually.
C
You
know,
since
a
lot
of
folks
here
are
interested
in
that
it
may
actually
make
more
sense
to
have
that
conversation
in
the
context
of
something
like
the
build
pack
community
or
the
serverless
working
group,
but
but
yeah,
but
it's
something
that
could
be
leveraged
in
the
context
of
this
right.
Like
I,
this
is.
This
is
not
running
like
my
crazy
build
pack.
That
demo
was
using
the
gcp
build
packs
that
mink
uses
the
I
think
paquetto
build
packs
by
default.
C
I'm
testing
it
with
both
of
those
and
the
boson
build
packs
that
the
openshift
folks
have
been
doing,
but
you
know
they're
all
running
slightly
different
build
packs,
so
I'm
not
keeping
up
with
chat.
It's
there's
a
whole.
G
Sure, this is Max. So I guess the quick question is: Matt, you know about the Source-to-Image work that Red Hat and a few others are doing, right? (Yeah, yeah.) So I guess you could integrate that easily there too, right? Instead of kicking off Tekton... I guess they're doing the work to, like buildpacks, convert your directory into an image.
C
Yeah. So kn faas, I'm pretty sure, is using pack under the hood locally, and I'm not going to lie: I don't have Docker locally and I don't want Docker locally, so I'm a big fan of being able to run it all on the cluster. I've been reviewing things like all the cloudevents Go and whatnot samples, and I reviewed all those PRs by cloning folks' repos, doing a mink buildpack build, and deploying it with kn.
C
So
I
I
am
a
big
fan
of
not
having
to
run
docker
locally,
but
I
think
you
know
being
able
to
leverage
something
like
pac
for
local
as
well.
I
think
makes
a
lot
of
sense
too,
for
folks
who
you
know
are
into
that,
but.
C
But
yeah
so
like
I,
I
mean
here's,
here's
a
fun
one
right.
What
happens
when
folks
are
using
different
hardware
locally
right,
then
yeah.
H
[inaudible]
C
I know. Thank you, Apple, for that M1 problem, right? This would run fine on that, because I don't need Docker locally. It stitches together the build context and everything ko-style, and again multi-arch. So, assuming the actual build process you're invoking can run on that cluster, it just works.
C
I
think
konica's
multi-arch
is
only
amd64
and
arm64
right
now,
but
it's
pretty
trivial
to
produce
to
add
architectures.
To
that.
I
think
the
the
main
thing
is
testing
the
the
co
image
that's
being
produced
is
based
on
the
golang
image,
so
it
has
so
many
architectures
and
then,
like
I
said,
with
build
packs.
The
the
main
thing
is,
I
just
don't
know
of
a
multi-arch
build
pack
to
play
with.
Otherwise
I
think
it
would
work
so.
Okay.
G
[inaudible]
C
That's a good question, right. So this was very intentionally created outside of Knative when I first started it, because I view upstream as very unopinionated, and we don't want things under Knative to really be biasing towards one option or another option. So, with some of the clarity around things like the sandbox, I see it sort of like kind and minikube.
C
I
I
fully
expect
that
folks
will
have
different
opinions
from
you
know.
What's
bundled
in
here
but,
like
I
said
it
started
as
a
way
of
sort
of
showing
folks
that
you
can
do
this,
and
you
know
a
lot
of
what
we
did
was
sort
of
designed
to
be
able
to
recombine
these
things
downstream.
So,
like
you
know,
if
you
know,
google
is
v
or
ibm
is
very
committed
to
saying
hey.
You
know.
We
know
that
the
distribution
we're
gonna
ship
is
always
gonna
have
istio
right.
C
C
...they can basically combine those things together and not have to run additional stuff on your cluster, if they can be combined, and shrink the footprint ever so slightly. It turns out it's not that slight when you're running Serving and ingress and TLS and Eventing and the sugar controller and the multi-tenant broker.
A
Thank you, Max. I think Erica has other questions as well.
D
Yeah, I think, sort of along the same lines: do you envision this, or something like it, to be sort of the first meaningful interaction that new folks would have with Knative? And then, piggybacking off that, if your answer is yes: have you tested this with, like, minikube? Because I'm imagining you would basically go to the Knative website and be like, "oh, I'll just get started with this." Is that the idea?
C
So
I
I
don't
know
it's
a
it's
a
good
question.
I
mean
I,
I
think
it's
a
possibility,
but
you
know
I
think
it'll
depend
on
sort
of
what
other
distributions
crop
up
right.
So
we've
talked
about
sort
of
a
starter
distribution
in
the
past,
which
might
just
have
the
k
native
bits
right,
like
one
of
the
opinions
that
this
has
right,
as
it
pulls
in
tecton
right,
which
you
know
moved
out
of
the
house
a
couple
years
ago.
C
So
you
know
it's
it's
not
k-native
anymore,
but
so
it
may
be
that
we
want
that
getting
started.
Experience
to
be
focused
on
just
canada
fits,
but
you
know
I.
I
think
that,
ultimately
you
know
it's
it's
something
that
you
know
I
I
would
ask
the
community,
you
know
what
we
think
these
sort
of
first
experience
folks
should
have
is,
but
I
I
certainly
enjoy
it
and
I'm
trying
to
make
it
sort
of
simple
to
get
started
on
one
night.
So
you
you
asked
about
mini
cube.
C
I
haven't
tried
minicube,
but
I
have
a
whole
bunch
of
end-to-end
tests
with
this
running
against
kind,
on
github
actions,
which
is
awesome
and
free
and
runs
a
lot
of
testing
for
it.
And
so
you
know,
mink
install
is
bootstrapping.
You
know
the
little
kind
cluster
with
mink,
but
it
is,
you
know,
there's
a
whole
bunch
of
sort
of
kind
set
up
to
that.
C
I
think
it'd
be
interesting
to
play
around
with
having
something
that
enabled
you
to
do
something
like
kind
create
cluster
locally,
and
you
know
bootstrap
a
whole
k
native
environment
from
nothing
sort
of
excuse
me,
sort
of
like
meek,
install
but,
like
me,
create
me
a
whole
little
kind,
cluster
sort
of
thing,
but
I
don't
have
docker
locally.
So
it's
hard
to
develop
stuff
that
works
with
kind.
A
[inaudible]
C
ko resolve? Yeah, you can use mink resolve, and it builds with ko or Dockerfile or buildpack and gives you the YAML. So if, in principle, you wanted to bootstrap a new cluster with that sample I built, you could, if you had pre-built it. Oh, you can also use this to separate out build from deploy, which is another key thing.
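For reference, the resolve-based split being described mirrors ko resolve; the -f flag spelling here is an assumption by analogy with ko:

```
# Build everything referenced by the YAML, push it, and pin digests.
mink resolve -f config/ > release.yaml

# Later, deploy the qualified release anywhere, by digest.
kubectl apply -f release.yaml
```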
C
You can also do that with the composition, if you pass different build contexts, or sorry, kube contexts, to mink build and kn. I don't know that I've plumbed it through mink, so I'm hoping kn has plumbed that through. But those are other things where you can start to split the build and deploy clusters apart. But yeah, mink resolve works, and yeah.
A
[inaudible]
E
I think it's me again, so sorry about it. It's just: when I hear that buildpacks make everything easier... I've built non-trivial applications with buildpacks, and I fight with all the updates to builders, libraries, and whatnot. So I wonder what your experience is, especially around: you deploy something, and oh, it's not working because the buildpack was changed.
C
Oh, that's a very good question, and you're asking a ko junkie. So I love ko, but people are always like, "but I don't want to write stuff in Go", and this brings the ko sort of experience to other languages via buildpacks. You know, one of the things I've struggled with too is repos with a whole bunch of stuff building out of the same repo.
C
But
you
know
I
think
it's
one
of
the
things
where
you
know
as
applications
shrink
and
you
have
microservices
and
functions.
You
aren't
going
to
want
a
whole
new
repo
for
every
function
that
you
build,
and
so
I
think
buildpacks
is
going
to
have
to
evolve
a
little
bit
to
be
able
to
produce
things
out
of
more
complex
repos
and
deal
with
some
of
that
complexity.
C
Better,
in
terms
of
you
know,
changing
libraries-
and
I
don't
know
if
your
comment
was
about
the
rebasing
stuff,
but
there
are
a
bunch
of
other
fancy
things
around
build
packs
that
you
know.
I
think
things
like
rebasing
rely
on
like
adi
guarantees
in
terms
of
the
same
build
not
working.
I
think
my
thought
would
be
that
if
you
use
the
same
builder,
I
would
expect
some
amount
of
determinism
there.
But
you
know,
if
you
use
a
tag,
then
it's
sort
of
yolo
mode.
C
But-
and
you
might
get
different
results
build
to
build,
but
yeah,
I
I
don't
have
a
ton
of
in-depth
experience
with
build
packs
to
really
comment
on
it.
Like.
A
[inaudible]
C
I think you're probably thinking of the Cloud Foundry experience, where...
E
Or, in general: if you push non-trivial code into somewhere, possibly multiple locations that have different setups (one is mink, another maybe something else), how do I know that everything works the way it's supposed to, unless I tested it with the exact version and can guarantee that in every location it will be running exactly the same? That's what containers give you.
C
I think that's one of the intents of things like ko resolve and mink resolve. mink apply and ko apply are great for development, because I can build and deploy my entire application in one go, but that's not what you want for release. You want to be able to build the thing and qualify the thing in one place, and then deploy it to other places, so being able to separate those is, I think, very intentional. And because you're doing it by digest (I am a big digest Kool-Aid person, so everything is by digest), it's not going to change out from under you unless you go into the YAMLs and change it. So the YAML should work everywhere.
H
I feel like I'm being forced into the position of defending buildpacks, which is never something I want to be doing, but the Cloud Native Buildpacks, the more modern buildpacks, do result in container images, right? The old kind of buildpacks were pretty opaque, and you sort of threw your code in there and hoped it still worked, and it could break behind you. But the new Cloud Native Buildpacks are just kind of a different way of doing the Dockerfile-style thing, right?
C
So yeah, this is all deploying everything by image digest, including, for buildpacks, where it's using the Cloud Native Buildpacks stuff with whatever builder you specified. And because it's by digest, it actually won't work with things like rebasing, in the same way that Serving resolves tags to digests.
F
I
was
wondering
matt
about
this.
Mink
is
funny
because
I
don't
know
if
you,
if
you
know
this
project
this
is
called
k3s.
It's
actually
kubernetes
built
into
one
binary.
So
I
thought
maybe
built
mink.
All
of
that.
It
took
a
k,
k
native,
all
kinetic,
builds
and
all
other
bits
into
another
binary
and
deploy
just
two
parts,
and
you
will
have
kubernetes.
Will
canada.
C
Yes,
so
I
mean,
I
would
say,
let's
see
I
I
would
say
it's
somewhat
similar
to
my
rough
understanding
of
that
right.
K3S
is
building
everything
into
a
single
binary.
Mink
is
effectively
doing
that
for
all
of
these
sort
of
processes
that
are
sort
of
compatible
right.
So
the
way
we've
designed
the
controller
architecture
right,
it's
all
using
shared
main
and
it
sort
of
figures
out
things
like
the
informers
and
whatnot.
It
needs
to
start
out
so
mink.
C
The
code
for
mink
before
I
added
the
cli
was
well
was
basically
just
main.gov
and
it
linked
in
a
bunch
of
different
entry
points
that
it
wanted
to
run
as
a
single
controller
process
and
then
a
different
config
that
sort
of
oriented
things
in
various
ways.
So,
like
you
know,
running
the
various
data
plans
is
site
well
different
containers
in
the
same
sort
of
data
plane,
pod
et
cetera,
but
the
way
mink
is
combining
all
of
those
controller
processes
in
the
control
plane.
C
Pod
is
similar
in
principle
to
the
idea
of
just
link
it
all
together
into
one
binary.
I
don't
think
we
want
to
do
that
for
everything,
because
I
would
really
love
it
if
our
q,
proxy
sidecar,
for
instance,
went
on
a
diet,
but
you
know
it's
it's
doing
something
sort
of
like
that
in
in
principle,
but
I
I'm
also
sort
of
opposed
to
the
idea
that
you
need
to
combine
everything
into
a
single
container
in
order
to
get
that
sort
of
tiny
footprint.
C
Since
you
know,
I
think
I
think
the
the
direction
that
things
are
headed
in
terms
of
sort
of
modern
application
development.
Is
you
stop
thinking
about?
You
know
the
fact
that
you
have
n
containers
right
containers
are
an
implementation
detail
right,
like
I
I
you
know,
I'm
a
total
container
image
nerd
and
the
goal
of
co
was
to
get
people
to
stop
thinking
about
them.
So
hopefully
you
know
with
things
like
being
able
to
do
this
with
buildpack.
C
Well,
unfortunately,
dockerfile
is
probably
not
going
to
help
with
that,
but
with
things
like
buildpack,
hopefully
you
can
have
a
higher
level
source
oriented
experience
where
people
just
stop
worrying
about
how
it
goes
into
the
container,
and
you
know
just
works
tm,
but.
C
Just
going
to
say
like
I,
you,
if
you
get
me
talking
I'll
talk
about
this
all
day,
so
feel
free
to
reach
out
to
me.
If
you
want
to
know
more
about
this,
there's
a
lot
going
on
on
this.
It's
something
I've
sort
of
been
poking
at
intermittently
for
better
part
of
a
year
now,
so
I
think
it
was
actually
ben
browning
asking
about
some
of
our
controller
stuff
that
prompted
me
to
put
it
together,
but
it's
sort
of
slowly
grown
over
time
and
so
yeah
and.
A
If you are eligible, please remember to vote in the Knative steering committee election.
A
[inaudible]
I
Unmute... yes, yes. I just wanted to let everybody know we're going to have an office hours thing that, honestly, it's not clear to me whether it's OpenShift TV or OpenShift Commons or KubeCon or whatever, but it will begin in...
A
Thank
you
paul,
and
we
have
five
minutes
left
because
we
had
a
really
good,
interesting
conversation.
So
will
people
like
to
go
into
small
rooms
or
should
we
give
everybody
five
minutes
back?
Can
I
see
like
some
thumbs
up?