From YouTube: 2021.5.14 Cloud Native SIG: Tekton Client Plugin discussion after presentation at online meetup
Description
Presentation of the Tekton Client Plugin here:
This SIG meeting took place two hours after and was a natural follow-on discussion, including:
Future use of the Tekton Client plugin with the CloudEvents plugin
Improving debugging of the Tekton Client plugin
A
Hi, welcome to the Jenkins Cloud Native SIG. This morning we had an excellent meetup, demo, and discussion on the Tekton Client plugin by Vibhav and Gareth, so that was really amazing. That will be online very soon, if not already, on the Jenkins YouTube channels, and this can just be a follow-up discussion, as well as a discussion of some things that were mentioned in that meetup, like mink, which I have many more questions on. And by all means, please ask any questions you have about any Cloud Native Jenkins topics, but especially anything that we've been discussing recently, like around CloudEvents or the Tekton Client plugin.
B
I'm actually just looking at mink right now. You asked how we could probably use it; it seems like it's a controller of its own, so I'll just share the link with you all right now.
B
I don't know how it could be used exactly, because the only thing I know about it is what James Strachan told us, which was that mink was able to build multiple images in parallel.
A
It's just interesting because they're quite ambitious on their repo. They said the goal of mink is "to form a complete foundation for modern application development which is simple to install and to get started with". That would be fantastic, and I'm just wondering how... a complete foundation for modern application development: do they mean the entire application lifecycle?
B
Yeah, it seems like it uses Knative and Tekton to do what it does. I'm still...
C
...to the tkn CLI, in terms of what you can do with it.
A
It's very cool. I'm sorry, I kind of already debated this at the beginning of this discussion. Similarly, I'd never heard of it before James Strachan mentioned it, although I looked at the link they put up for the demo and actually bookmarked it, and James Rawlings is in there, I could see. I can see his image; he's sitting there watching the presentation.
B
We could spend some time looking at what mink is and see how it integrates with our stories with Tekton.
C
I think some good demos would be really handy with that, or repos that you can clone to play with. I was thinking about doing one for, sort of, Helmfile with, like, that ci.jenkins.io repo that I have, but cut down: get rid of all the stuff that we don't need, avoid using a custom image, and just have the bare-bones kind of stuff that you would need to run.
C
Tekton and Jenkins together, I think, would be really good, including the workload or the role binding. That would be nice. But then, even once that's there, having some example projects that you could import, so like, this is a repo with a Jenkinsfile and some Tekton resources, that would be great. But then the more advanced stuff, I suppose, so stuff using the 'uses' syntax.
C
It comes from Jenkins X. Oh okay, it's like another pipeline inside; it's the jx effective-pipeline stuff that manipulates the pipeline when you enable it, and it can read a pipeline from another place, which it does. It would kind of work out of the box on a Jenkins-and-Tekton sort of installation.
C
But if you want to reuse the jx pipelines, you actually need a jx installation that's there, because it assumes that all your service accounts are set up in the right way and that you have the right CRDs in the cluster, so that it can get the correct SCM connection and all that kind of stuff. So whilst it's there, at the moment it's really just a way of controlling, sort of, Jenkins X pipelines from Jenkins. So it works really well with that, but if you want to use it in a...
A
Nice, I agree with your first point too, the importance of having sort of example — not quite tutorials, but little example demos for people to spin up themselves — because I think any time you're asking for people's engagement, it's nice to make it super easy for them to experience it and see it, and then hopefully they can get more involved too.
B
Oh yeah, I was going off on that, but it was Vincent and I who gave the talk. It was about a new feature in Tekton with which you'll be able to stop an execution of a TaskRun, and you should be able to kind of hijack a container, and then do some things over there and figure out why stuff went wrong. That's the kind of debug it was.
C
Yeah, so quite often you might get a TaskRun that's been created, and for some reason the pod can't be created from it: you're trying to mount a workspace or a shared volume or something, or there's a secret that's referred to in there. At the moment the feedback into Jenkins isn't brilliant.
C
It's quite good, but quite often you just get a null pointer, or you just get an exception that's thrown that says "I can't find this pod". What we're not doing is displaying the status coming back from the TaskRun that actually failed, and then linking it through to the pipeline.
B
Does it make sense to do some pre-checks? Like, parse whatever PipelineRun or TaskRun we are using, extract all the volumes that are mentioned there, and then just query them once before calling Tekton. Reading the Tekton resource could be a step we do before actually creating it, and at that point it can fail before actually creating the resource, by saying that this resource was mentioned in this TaskRun but it's not available on the cluster, aborting our execution.
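The pre-check being proposed could look something like this: walk the TaskRun manifest for referenced Secrets and volume claims, and fail fast if any are absent. A minimal Python sketch, assuming plain dicts in place of real Kubernetes API objects (the field names follow the Tekton TaskRun workspace spec; the "existing" sets stand in for a real API query):

```python
# Hypothetical pre-check: scan a TaskRun manifest for referenced Secrets and
# PVCs, and report any that are absent from the cluster before creating it.

def missing_references(taskrun: dict, existing_secrets: set, existing_pvcs: set):
    """Return a list of human-readable problems; empty if all references resolve."""
    problems = []
    spec = taskrun.get("spec", {})
    for ws in spec.get("workspaces", []):
        secret = ws.get("secret", {}).get("secretName")
        if secret and secret not in existing_secrets:
            problems.append(f"workspace '{ws['name']}' needs missing Secret '{secret}'")
        claim = ws.get("persistentVolumeClaim", {}).get("claimName")
        if claim and claim not in existing_pvcs:
            problems.append(f"workspace '{ws['name']}' needs missing PVC '{claim}'")
    return problems
```

If the returned list is non-empty, the plugin could abort before ever creating the resource, which is the failure mode described above.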
C
So if the PipelineRun fails to create, it's got one of those, like, admission-webhook-type things that validates it and then checks it's all there.
C
So you will get that feedback, and you should get the status and the reason from the actual state fields back into the Jenkins console.
A
Would you, or either of you, like to do a demo, a walkthrough of how to debug this?
C
The other piece I noticed: I was running some of these pipelines the other night from my Airbnb, with a very slow internet connection, and the first time you pull an image down they were taking more than 60 seconds.
A
How would you go about improving that logging? Do you want to talk about it?
C
I think at the moment, what we're doing is... actually, I don't know. The test cases that I've been playing with have only really been around PipelineRuns; I haven't done any sort of TaskRuns directly, so it may be a bit different. But we spin off, we get a PipelineRun, and then we kind of get the UID that comes back with it, and then we're looking for TaskRuns that have been created by that.
C
I think it's the correct way, where the owner reference is set to the right thing: we loop through those and try to discover a pod for each of them, and then kind of wait for it to start. I think that's probably the right way to do it, although it may be worth looking at the logic that the tkn client uses, because that actually seems to follow it quite nicely.
C
It pauses, waits for the next one to start, and it even prefixes the name of the, kind of, TaskRun and — I think it's the container, or it might be the task in the pipeline or something — in the log output, so you can see exactly where things are coming from. Maybe we should use a similar log format for that.
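For reference, the tkn CLI tags each streamed line with where it came from, which is the format being suggested here. A tiny sketch — the bracketed "[task : step]" shape mirrors tkn's output, and the names are illustrative:

```python
# tkn-style log prefixing: tag every streamed line with the task and step it
# came from, so interleaved TaskRun output stays readable in the Jenkins console.

def tkn_style_prefix(task: str, step: str, line: str) -> str:
    """Prefix a single log line the way the tkn CLI does."""
    return f"[{task} : {step}] {line}"
```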
B
I lost you about the control-switching part. Is this about switching control from Jenkins to Tekton?
C
No, sorry, this is just for the way that we're streaming the logs to the Jenkins console at the moment.
C
It's the "no TaskRuns have been created" at that point, so I think it's just tweaking the kind of looping logic that we have there. And then the other part of that was, when we go about writing to the Jenkins log, to prefix the log statements with sort of the same information that the tkn client does, because that's really nice and simple: you can see it's gone from this part to this one, to this one, to this one, done, great, right?
B
What would be nice is, once it shows Tekton, maybe something like you said, how the tkn CLI does it: it shows the container — or no, the step name — and then the log itself. Is it a task and then the step name, is that what it shows? It shows the step name, yeah, it just shows the step name in a square bracket. But when you want to check the container, the container is named as step-dash...
B
I
mean
step
name
dash
step,
that's
how
the
container
is
named.
Yeah
right
now
we
are
using
owner
references
directly
like
we
are
checking
if
there
is
an
honor
reference
for
some
called
and
if
it
has
the
taskbar
normal
reference
and
based
on
that,
we
are
pulling
right,
yeah
we're
not
checking
for
status,
ready
right
now.
That
needs
to.
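A rough sketch of the two conventions just described — Tekton names a step's container `step-<stepName>`, and the TaskRun's pod points back at it via an ownerReference, so the plugin filters pods on that. Plain dicts stand in for the real Kubernetes objects here:

```python
# Conventions described above, sketched with plain dicts instead of API objects.

def step_container_name(step_name: str) -> str:
    """Tekton names the container for a step 'step-<stepName>'."""
    return f"step-{step_name}"

def pods_owned_by_taskrun(pods, taskrun_uid):
    """Keep only pods whose ownerReferences point at the given TaskRun UID."""
    return [
        p for p in pods
        if any(ref.get("kind") == "TaskRun" and ref.get("uid") == taskrun_uid
               for ref in p.get("metadata", {}).get("ownerReferences", []))
    ]
```

Checking the pod's status (e.g. the Ready condition) before streaming logs would be the missing step mentioned above.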
C
Unit tests, right? There will be... they're kind of end-to-end tests, yeah, although they do run as part of the normal JUnit unit-test phase within Maven, because it's using the Fabric8 mock Kubernetes stuff to do it.
B
I've noticed one thing: the Fabric8 objects — sorry, the Fabric8 classes — don't have all the parameters given for a CR; some of the newer parameters are missing.
B
Okay, so that was one of the reasons why we dropped the custom, or create-custom, task that we had before, because it seemed like a better choice to just do YAML at that point, because the YAML is directly...
C
I'm assuming that that Fabric8 test client, or whatever, is generated rather than handcrafted. Is that the case, do you know?
B
The test client or the main library?
C
Both, really, I suppose. There's the Tekton client, the sort of main library thing, as well.
B
The CRDs especially, I'm not sure if they are. I mean, let me just... I'm going to check that and see if we have something on that side.
A
Yeah, I guess my main update would be: everything is going extremely well for you both, and we will almost certainly have a rough...
C
And I think that whatever we choose to do with CloudEvents integration with the Tekton Client stuff will be really cool as well. That could be really interesting: triggering pipelines, or even just getting notification that stuff has run, like a deployment or whatever is taking place, because in the Tekton logs you get all of the CloudEvents stuff in there already, so you can see it if you want to look.
A
Do you think that it would just be a matter of installing both plugins and we'd set them up to work together, or will this evolve into something where you would naturally always use them together, so we would make them one? I'm just curious.
C
I think you'd probably keep them... they'll probably be separate plugins, yeah, I suppose, in terms of how it works.
C
I think the difficulty with CloudEvents is going to be when... when you've got two things you want to integrate, that's really straightforward, but as soon as you start having more than two, are you configuring things on a point-to-point communication channel, rather than having something that you can broadcast events to, and then multiple things get those feeds? That may be worth it. I know we used... it's not really...
C
And I understand why it's quite a difficult thing to do: if you've got 10 or 20 or 30 different endpoints that you're trying to relay to, remembering which ones have accepted which messages, and which ones you need to store up for whom, and having different retry lists, basically — at scale it can be a lot of data.
B
This problem is going to be the main problem to solve when it comes to CloudEvents, because the initial stuff is pretty easy: if you want to implement the HTTP request stuff through which CloudEvents are going to go, that stuff is easy. But how it will work with, you know, scalable infrastructure — that's going to be a lot of setting up and trying out a lot of different things.
A
If either of you know: what has the CloudEvents group in the CNCF been doing? I would imagine they're thinking about this problem; do they have any forming best practices?
B
I think you know this, but they do have a best-practices SIG. I haven't joined it, but I think it's high time at least one of us probably should. I did join the Events SIG last week — or was it this week?
A
Oh,
I
meant
I
didn't
mean
that
I
probably
said
that
wrong.
I
didn't
mean
the
cdf.
I
meant
the
same
staffs,
but
yes.
B
That's another meeting, yeah, but it makes sense to think about the best practices around them and see how people are already doing it.
B
It's probably something we'll have to look into in the background. So I had joined — you were also there — the Events SIG for the CDF, and there it's "let's work on building a mini prototype for how CloudEvents will work". My feeling is that, slowly, as the discussion around the prototype begins, there might also be discussion around how this could be scaled. That discussion had kind of already started when Four Keys was being discussed, because, like, how...
B
How
can
we
manage
so
many
requests
at
one
time,
and
it
was
probably
easy
to
do
it
through
bigquery,
but
it
might
not
be
as
easy
to
implement
something
that
can
do
something
similar
on
a
smaller
scale
but
yeah
it's
it's.
It's
gonna
going
to
be
a
pro
like
something
to
think
about
in
in
the
next
few
weeks.
A
What are your best resources for considering this problem? Are there any go-to learning resources that you prefer?
B
Doing that, I'm hoping that, you know, I get more used to eventing architecture itself, so I'm looking forward to the weekend — that's why — so I can just open it up.
A
Yeah, I mean, for me the place, or the individual, I go to to try and understand more — my go-to for that — is the work of Martin Fowler, because he just explains things very well. But even with that, it's so broad, and there are so many possibilities, that it's really hard determining what would be the best practices for a given scenario, or whether we can even say generalized best practices. It feels like a very quickly evolving space, which is exciting.
A
It also feels nice that way, because you realize that a lot of people are still trying to figure this out. So there's that sense of being like, okay, we all are kind of working together to try and find out what would actually, in practice, be the most scalable, most reliable — things like that.
C
It doesn't really matter, like, optimizations... it's not going to make any difference, or you might save a second or so. But as soon as your data sets start to get bigger, it becomes really difficult; when you can't process it on a single machine, it's a problem. Even just the Jenkins open-source statistics: you can't process them on a brand-new MacBook Pro, it's too much data, right?
C
So
you
have
to
treat
like
how
you
handle
the
data
and
how
you
process
it
in
a
different
way
and
that's
when
it
all
becomes
important.
A
And
I
guess
something
like
not
that
we're
using
it
for
that,
but
bigquery
would
be
really
great
for
prices
and
launch
watch
batches
yeah.
C
It
it
it
is
really
good
for
that
kind
of
stuff.
It
can
be
very
expensive
if
yeah
yeah
they
have,
they
have
a
like.
Quite
a
nice
charging
model
where
you
can
go,
and
you
know
in
mysql,
you
can
go
and
have
a
look
at
like
I
can
go
and
do
an
explain
on
a
query
and
and
see
if
it's
correctly
using
the
right
indexes
and
stuff,
but
generally
it
doesn't
really
matter,
but
you
do
the
same
in
bigquery.
C
B
Are we talking about BigQuery? I lost you two on which... yeah, so that's what Four Keys is based on, like the demo. I've never used BigQuery itself, but it's a data-processing toolkit.
B
So when we talk about CloudEvents integration with Tekton, I was initially thinking that we could probably start with, like, an EventListener thing, and then, as the CloudEvents plugin project goes forward, we could think of how to integrate it. But I think a good place to start would be being able to just create the EventListener in Tekton. You had mentioned that, Gareth, in the "what's next" part — were you thinking of something similar?
C
Yeah
I
mean
I
was.
I
was
looking
at
the
fact
that,
because
techton
is
kind
of
it
has
these
events
inside
it,
you
know
internally
anyway,
it
would
be
really
nice
to
try
and
get
them
out
of
there
like
try
and
either
send
them
to
something
or
I,
I
think,
yeah
try
to
try
to
get
them.
Yeah
get
them
to
get
the
events
out
into
something
that
can
process
them
will
be
like
probably
initial,
a
good
initial
step.
C
It
would
be.
It
would
create
quite
an
interesting,
like
I
suppose,
once
the
cloud
events
plug-in
for
jenkins
is
there
you've
got
quite
an
interesting.
You
know
circular
thing
going
on
where
they
could
be.
C
You know, jobs completing inside Tekton that could trigger other things, could trigger jobs in Jenkins, that then trigger more jobs in Tekton, and...
B
I just keep getting confused between whether we should use the webhook handlers given by Jenkins, or, like, at what point should we be able to tell the user to switch — like, use...
B
...the one that you've already got for your Jenkins job, which is there, so they can trigger it using that, or use the one given by Tekton. When do we...? Probably it could be like a chain: trigger the Jenkins job, which then triggers...
C
May
be
interested
in
like
different
things
as
well,
because
I
think
there's
a
nice
feature
in
the
helm
chart
for
jenkins,
where
you
can
create
the
you
can
have
like
jenkins
as
being
private,
but
have
a
secondary
ingress
rule
set
up
automatically
for
the
webpack
handler.
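The Helm chart values being described might look roughly like this. The exact keys (`controller.secondaryingress`, etc.) are from memory, so check them against the chart's values.yaml for your version before relying on them:

```yaml
controller:
  ingress:
    enabled: false                 # the Jenkins UI stays private (VPN only)
  secondaryingress:
    enabled: true                  # a second, public ingress just for webhooks
    paths:
      - /github-webhook
    hostName: jenkins-hooks.example.com
```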
C
So
the
you
know,
you're
running
you're
running
your
cluster
on
a
private
network
that
you
that
you
have
to
have
a
vpn
or
something
to
get
to
it.
But
the
only
thing
that's
publicly
available
is
the
webhook
and
that
that's
quite
a
nice
way
of
doing
things
for
this
kind
of
stuff.
C
We
actually
use
it
on
on
the
jenkins
infra
to
do
this.
This
exact
thing
where
we
hide
hide
clusters
behind
a
vpn,
but
it's
the
only
thing
that
is
exposed
is
the
the
secondary
ingress
and
because
you
get.
C
I mean, you can even have different ingress rules; you could have different certificate authorities for each of them, which may be a...
B
So,
to
clarify
this
is
the
ingress
used
for
the
web.
C
Yeah,
it's
turned
off
by
default,
but
when
you,
when
you
enable
it
it
creates
yeah,
it
creates
a
secondary
ingress
to
root
that
through.
C
I was just going to say that, as long as the annotations and labels and stuff are exposed as well on the ingresses, you can do loads of clever things with them too — like, if you wanted to put this behind, I suppose, any kind of service mesh or something like that, you could do that.
A
Question,
actually
is
what
do
you
have?
What
have
you
heard
about
and
what
you
know
about?
What
do
you
think
about
the
secret
store,
csi
driver,
that's
sort
of,
I
think
it's,
I'm
not
sure
it's
actually
in
kubernetes,
yet
it
might
still
be
under
development,
and
so
I
don't
really
know
very
much
about
it.
But
I
was
super
interested
in
that.
C
So
this
is
the
way
that
kubernetes
stores
its
secrets
internally.
A
The Secrets Store CSI driver, because I don't know if they're extending the functionality. They're talking about it as if it can help consume external secrets, and I don't know if they're beginning to extend the functionality into something more like what GoDaddy's external-secrets does. It was different from how I've heard it talked about before, and I wondered what you knew about that and thought about it.
C
So
the
the
external
secret
stuff
is,
I
think,
of
that
as
a
way
of
replicating
secrets
into
kubernetes
right.
So
I
want
to
take.
I
want
to
store
them
externally
from
kubernetes,
and
I
want
to
get
them
in
and
and
use
them,
but
once
they're
in
kubernetes,
they're
protected
by
standard
kubernetes
are
back.
C
And, you know, role control — and there is no real encryption; maybe they're encrypted at the disk level, but really it's just base64-encoded stuff. Whereas the thing you're referring to, the CSI driver thing, I think is a way of actually storing them differently: rather than storing them in that way, they maybe back them off into something like a KMS or a Vault-style solution and store those secrets in there, which might... yeah, that might be an option.
C
The bit that always kind of interests me about these is: how would this work in a cloud Kubernetes provider? I'm guessing that GKE would enable Google Secret Manager somehow as its backing store, or maybe the KMS stuff, actually — that could be another option, because they have got Cloud KMS.
B
No, but I learned something new today, because for some time I was wondering, and now it makes a lot of sense, why you would set up KMS — Google's KMS — with this. I was thinking before, like, I have some secrets for a thing I'm working on in GKE, not in KMS, but how would I, you know, use them on GKE? One of my friends asked me and I just had no idea.
C
So
I
do
this
quite
I
do
this
quite
a
bit
actually,
and
I
had
to
do
it
the
other
week
when
I
I
broke
my
cluster
and
then
had
to
recreate
it
with
everything.
C
But
I
I
use
google
secrets
manager
to
put
the
secrets
in
and
then
deploy
kubernetes
external
secrets
to
to
configure
it
to
replicate
those
secrets
down.
I'm
providing
your
you've.
You've
got
workload,
identity
enabled
and
you
have
a
a
service
account
in
there.
That
has
got
permission
to
read
a
secret.
It
just
replicates
down,
and
so
it
appears
as
a
kubernetes
secret
and
each
time
you
go
in
and
it
polls
as
well.
C
So,
each
time
you
go
in
and
update
it
through
the
ui
it
it
will
replicate
down
into
the
cluster,
which
is
which
is
really
nice
and
obviously,
if
it's
mounted
to
a
volume,
it's
going
to
get
an
update
that
the
secret
has
updated.
So
it
knows
that
it's
changed,
so
you'll
get
all
that
kind
of
stuff.
C
If
you're
doing
it
with
jenkins,
you
can
there's
a
kubernetes
credentials,
provider
plugin.
That
will
take
the
secrets
that
external
secrets
has
created
and
replicate
them
into
credentials
within
jenkins,
which
is
really
nice.
If
you
want
to
that's
what
I
do
with
my
my
docker
token
and
those
kind
of
things,
it's
like
store
them
into
store
them
in
google
secrets
manager,
replicate
them
into
kubernetes
secrets,
and
then
they
go
into
jenkins.
That
way,
which
means
that
out
of
the
config
like
it
doesn't
it's
not
in
the
gcas
config,
I
don't
need
stops.
B
We
use
this
with
openshift
sync
plugin
as
well,
so
the
secrets
are
synced
as
kubernetes
credentials
and
those
credentials
are
used
for,
like
whatever
is
being
done
by
sync
plugin
or
client.
C
I
mean
that
would
be
the
easiest
thing.
It
would
be
just
be
to
do
that,
but
it's
pretty
insecure,
because
they're
just
available
in
plain
text,
then.
C
And
they're
probably
not
going
to
be
masked
in
the
ui,
but
you
yeah,
we
probably
do
need
a
method
of.
Certainly
if
we're
running,
if
you're
running
that
to
create
raw
step
to
ask
or
whatever
insider
with
credentials
block.
That
could
be
quite
a
cool
use
case.
For
that
that'd
be
nice
like.
How
would
you,
then
you
set
up
the
credential
or
the
secret
in
a
way
that
can
be
passed
down.
B
I
I
didn't
get
you,
what
would
be
very
cool.
C
So
like
in
jenkins
pipeline,
you
have
you,
have
this
with
credentials
block
that
you
can
use
where
you
can.
You
can
basically
load
a
credential
if
it's
a
short-lived
credential,
it
will
request
a
new,
a
new
kind
of
version.
Of
that.
C
A
good
example
is
the
github
thing,
so
you
get
a
short-lived
token,
and
then
it's
passed
it's
available
as
as
environment
variables
to
whatever
the
steps
are
you
run
inside
that
block,
so
we
would
want
to
have
a
bit
of
a
story
about
how
we,
how
we
would
use
with
credentials
and
then
put
the
text
on
create
raw
inside
what
that
actually
does?
C
Yes: do we pick up — do we know — that we've loaded a secret? How would we reference a credential that we would need to pass to it?
B
In
like
months
ago,
when
I
was
just
starting
off
with
the
plugin,
I
had
created
a
story.
I
don't
know
what
I
was
thinking,
but
I
just
wrote
add
support
for
kubernetes
credentials,
provider.
B
But
I
was
thinking
if
we
could,
you
know
kind
of,
have
add
some
kind
of
help
with
the
service
account
stuff
or
like
there
are
certain
pipelines.
Only
certain
people
can
execute
this
thing
in
those
storms
and
what
kind
of
map
back
to
kubernetes.
C
I don't think there's any way of doing it the other way — so taking a credential from Jenkins and making it available in Kubernetes — but that would certainly be a good kind of use case for that.
B
Yeah,
would
you
be
able
to
update
the
issue
if
you,
oh,
if
you
end
up
doing
the
research,
I
just
shared
it
with
you.
A
Yeah, it's been a good chat today. Any other questions or topics to discuss for today?