A: Cool, hey everyone, welcome to the Harbor community meeting, I hope you're doing really well. The agenda for today: I've invited one of our community members, Cameron McAvoy, to come out here and talk about the work he did on a mutating webhook to complement the current proxy cache, and then, if we have time, I think Duncan can also talk about the Prometheus integration that we've been working on. And then I just want to say: last week we had a great TGIK session that was focused on Harbor. If you haven't seen TGIK, it's basically a session where two developers, you know, start from scratch: they go to the Harbor website or GitHub to attempt to install the project, fumble their way through deploying it and configuring it, and use some of the latest features like the proxy cache. So if you haven't seen that session, or any TGIK session, I'd definitely encourage you to check it out.
A: Okay, and with that I'll hand it off to Cameron. Somebody stop the share... sure.
B: Cool, I'm just going to briefly discuss the Harbor container webhook. I'm Cameron McAvoy, I'm a software engineer at Indeed.
B: So, basically, the Harbor proxy cache allows users to pull from a cache for, like, Docker Hub, but sort of the main barrier to adoption is that you have to manually update deployments, StatefulSets, etc. to point to the containers in the proxy cache; Harbor isn't doing that automatically, and this project sort of solves it. The webhook inspects incoming pod specs, looks at the containers and init containers, and mutates the image field to point to the proxy cache if configured. I'll briefly go over the architecture of the webhook, the two modes of operation, and then sort of the roadmap.
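(For illustration: a minimal sketch of the mutation described above, assuming a hypothetical `mutatePod` helper with an injected rewrite rule; the real webhook wires its rules up from configuration.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mutatePod sketches the webhook's core behavior: every init container and
// container image in an incoming pod spec is passed through a rewrite
// function, which points it at the proxy cache when a rule matches.
func mutatePod(pod *corev1.Pod, rewrite func(string) string) {
	for i := range pod.Spec.InitContainers {
		pod.Spec.InitContainers[i].Image = rewrite(pod.Spec.InitContainers[i].Image)
	}
	for i := range pod.Spec.Containers {
		pod.Spec.Containers[i].Image = rewrite(pod.Spec.Containers[i].Image)
	}
}

func main() {
	pod := &corev1.Pod{Spec: corev1.PodSpec{
		Containers: []corev1.Container{{Name: "web", Image: "nginx:1.19"}},
	}}
	// Placeholder rule for the demo; the real rules come from the webhook's
	// static or dynamic configuration, discussed below.
	mutatePod(pod, func(img string) string {
		return "harbor.example.com/dockerhub-cache/library/" + img
	})
	fmt.Println(pod.Spec.Containers[0].Image)
	// Output: harbor.example.com/dockerhub-cache/library/nginx:1.19
}
```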
B: So this is just a screenshot I took of the Harbor container webhook deployed in one of our production clusters at Indeed; it's a screenshot from Argo CD, specifically. On the bottom here there's a Certificate resource, and it's using cert-manager to issue and create the certificate for the mutating webhook. There's a Deployment resource.
B: The Deployment has a couple of ReplicaSets attached to it, and the pod. There's a MutatingWebhookConfiguration, a service account, a Service associated with it, an Endpoint, and then a ConfigMap; the ConfigMap contains the actual configuration for the webhook and is mounted into the pod. This is all pretty standard, there are no gotchas. It's all deployed via the Helm chart that I have inside of the repo. The webhook has two modes of operation that can be configured: a static mode and a dynamic mode.
B: The static mode doesn't require any secrets to be configured, and I pasted a little bit of the configuration right here. Basically, on the left-hand side you put the registry that you want transformed to the Harbor proxy cache. So in this case this is the Docker Hub endpoint, docker.io, and this is harbor.example.com plus the project name, in this case dockerhub-cache. docker.io is a special case: if the webhook sees an incoming container image without any registry configured, it assumes that means it is a docker.io image and will transform it accordingly. So an image doesn't need to have docker.io as its prefix, although it would also work if it did.
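(A minimal sketch of that static-mode transformation, assuming a single docker.io rule like the one on the slide; the function name and the bare-image heuristic are illustrative, not the project's actual code.)

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteImage applies one static rule: docker.io images are rewritten to
// pull through a Harbor proxy-cache project. Images whose first path
// component is not a registry host are treated as docker.io, matching the
// special case described above.
func rewriteImage(image, harborHost, project string) string {
	parts := strings.SplitN(image, "/", 2)
	// A first component is only a registry host if it contains "." or ":"
	// (or is "localhost") and a path follows it.
	isHost := len(parts) == 2 &&
		(strings.ContainsAny(parts[0], ".:") || parts[0] == "localhost")

	switch {
	case isHost && parts[0] == "docker.io":
		return fmt.Sprintf("%s/%s/%s", harborHost, project, parts[1])
	case isHost:
		return image // some other registry: not covered by this rule
	case len(parts) == 1:
		// Bare official image, e.g. "nginx:1.19" -> "library/nginx:1.19".
		return fmt.Sprintf("%s/%s/library/%s", harborHost, project, image)
	default:
		return fmt.Sprintf("%s/%s/%s", harborHost, project, image)
	}
}

func main() {
	for _, img := range []string{"nginx:1.19", "docker.io/library/nginx:1.19", "quay.io/foo/bar:v1"} {
		fmt.Println(img, "->", rewriteImage(img, "harbor.example.com", "dockerhub-cache"))
	}
}
```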
B: And then there's the dynamic mode. Here in the dynamic configuration you can see there are a couple of configs: there's a resync interval for how often it re-fetches projects from the Harbor API to find proxy caches, there's a timeout for how long it will wait before it gives up talking to Harbor, some TLS settings, and then the actual Harbor endpoint that it uses to hit the API.
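(A sketch of what that dynamic mode implies, assuming Harbor's v2 `/api/v2.0/projects` endpoint and a `registry_id` field marking proxy-cache projects; both are my reading of the description above, not verified against the webhook's code.)

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// harborProject holds just the fields this sketch cares about. The JSON
// field names are an assumption about Harbor's v2 projects API; proxy-cache
// projects reference an upstream registry.
type harborProject struct {
	Name       string `json:"name"`
	RegistryID int64  `json:"registry_id"`
}

// resyncProjects sketches the dynamic mode: on every tick it re-fetches the
// project list from the Harbor API, giving up after the configured timeout,
// to discover which projects are proxy caches.
func resyncProjects(endpoint string, interval, timeout time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for range ticker.C {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		req, err := http.NewRequestWithContext(ctx, http.MethodGet,
			endpoint+"/api/v2.0/projects", nil)
		if err != nil {
			cancel()
			continue
		}
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			cancel()
			continue // e.g. Harbor unreachable within the timeout
		}
		var projects []harborProject
		_ = json.NewDecoder(resp.Body).Decode(&projects)
		resp.Body.Close()
		cancel()
		for _, p := range projects {
			if p.RegistryID != 0 { // points at an upstream registry
				fmt.Println("found proxy-cache project:", p.Name)
			}
		}
	}
}

func main() {
	resyncProjects("https://harbor.example.com", 5*time.Minute, 30*time.Second)
}
```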
B: And then I just linked a couple of the issues that I thought were relevant and might involve changes to the webhook. The first issue is actually the issue around the proxy cache user story. So if Harbor did adopt this webhook, I think it would solve that issue and fill the need there.
B: Harbor robot accounts are scoped to the project level, and so they can't really list projects and find the proxy caches configured on them. System robot accounts might be able to do that, in which case that would remove the need to have the actual admin credentials embedded, and that would solve some of the security issues with that.
C: [question inaudible]

B: I don't think so, because in the static mode the webhook isn't actually pulling from any of these registries; it doesn't know anything about them.
C: Oh yeah, that makes sense, sorry. And for the dynamic mode, I think we can also use a regular Harbor user, right? Because if the user has permission to query the project, he can.
C: I would like to hear others' opinions, but there may be some very tricky scenarios, like when there are two projects that both proxy the same endpoint. And how do you compare the endpoint with the hostname from which you originally tried to pull the image? There may be some cases where, for example, one is using an IP and the other is using a hostname, or something; there may be some tricky scenarios there.
B: Yeah, I think the first one configured, like the lowest project ID, would probably win if there's any contention around multiple proxy caches in that case.
C: Yeah, Alex, I think we need to decide whether... I mean, the dynamic mode is interesting, but maybe we want to discuss whether it's really needed or not. Maybe we can accept it and see users' feedback, but I think there may be some corner cases that, you know, lead to some errors.
A: Sure, yeah, we can discuss this offline. Thanks, Cameron, thanks for the presentation and the work that you did. This is super important, for users to be able to use the proxy cache without having to change all their specs. That was like the number one requirement we had at the very beginning, but it was too much for the scope of the first release. So really nice job, thank you.
A: Alrighty, do you want to share the progress on the Prometheus integration?
D: Okay. Actually, our architecture has changed a little bit from last time. Last time we had a connector in our Harbor components, but currently we've decided to remove the connector, because we think that using Nginx to expose our metrics is enough. We can use the query string to decide which component exposes its metrics, so it looks like this.
D: Yeah, currently for the different Harbor components we can use the query string. For example, this is Harbor core: we can use this URL to expose our Harbor core metrics, and if we want to show registry metrics, we can change the query string to registry. So we can use the query string to get each component's metrics.
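(For illustration, a sketch of that query-string dispatch as a single handler; the parameter name `comp` and the port are assumptions, and in the real setup Nginx does the routing rather than Go code.)

```go
package main

import (
	"fmt"
	"net/http"
)

// A single /metrics endpoint where the query string selects which Harbor
// component's metrics are served, mirroring the URLs shown on the slide.
func main() {
	http.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		switch r.URL.Query().Get("comp") {
		case "core":
			fmt.Fprintln(w, "# Harbor core metrics would be proxied here")
		case "registry":
			fmt.Fprintln(w, "# registry metrics would be proxied here")
		default:
			http.Error(w, "unknown component", http.StatusNotFound)
		}
	})
	// e.g. curl "http://localhost:9090/metrics?comp=core"
	http.ListenAndServe(":9090", nil)
}
```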
D: Oh, okay. We also added a configuration item in our config, the harbor.yml file. We add a metric item here, and there are three items inside it: enabled, which means whether we enable the metrics; the port, which is the port the endpoint listens on; and the path we use to expose our metrics. And as you can see, currently...
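(A sketch of what that block might look like and how it could be parsed, assuming the three fields are named enabled, port, and path as described in the talk; not checked against the shipped harbor.yml.)

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// metricConfig mirrors the three settings described above: whether metrics
// are enabled, the port the endpoint listens on, and the path it serves.
type metricConfig struct {
	Enabled bool   `yaml:"enabled"`
	Port    int    `yaml:"port"`
	Path    string `yaml:"path"`
}

func main() {
	raw := []byte("enabled: true\nport: 9090\npath: /metrics\n")
	var m metricConfig
	if err := yaml.Unmarshal(raw, &m); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", m) // {Enabled:true Port:9090 Path:/metrics}
}
```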
D: This Harbor instance is already running, and the metrics look like this. We can also query them from Prometheus, and we can use Grafana to collect the metrics. Currently we've implemented the instrumentation middleware in the Harbor core component, and we also expose the registry's metrics. In the future maybe we will add more instrumentation metrics to Harbor core, like errors and some other useful metrics. And besides that, we also provide an exporter.
D: As you can see in this slide, the exporter will expose our project information, like the projects and the project quotas and things like that, which the administrator or operator may be interested in, and our components will export their own instrumentation metrics. The exporter is a work in progress; maybe next time we can demo our exporter.
D: Oh, you mean this? This is the request rate.
D: Oh yeah, maybe this is a bug that I already fixed, but I think the data is not updated. But as you can see in this...
D: Yeah, as you can see on this website the code is fixed, but I will raise a PR to fix this.
A: Hey, really great work. Can you go back to the graph?
A: Yeah, so I think that's like something we're looking for from a Harbor admin perspective, right: some charts, you know, plotted over time. But I think also some histograms or bar charts, where your x-axis would be like your tenants or your projects, would make sense, right? So, like, you know, requests or...
D: Yeah, I think some of the metrics you mentioned we would need to expose in the exporter, but the exporter is still a work in progress, so maybe we cannot share that this time. Maybe next time we can show something you mentioned.
A: Okay, and you mentioned you're capturing core, right? Just the core component, Harbor core?
D: Yeah, this is the core and the registry metrics, because, as we discussed before, currently we just expose the core and the registry. Maybe in the future we can expose all the components of Harbor.
A: Right, I was thinking, you know, anything going through, even like traffic through the web portal, would be interesting, right? Or if they're using scanners, or, you know, the DB, things like that. But yeah, I agree that core and registry are the two most important things for the first release, 2.2. Yeah, really nice. How long did it take you to get those graphs up? Grafana I've never used before.
D: Oh, Grafana? I think it's very easy to use. You just, for example, you can use the Pro...
A: I was going to say, I think, you know, Grafana and tools like it can do so much more than we could ever put into Harbor. I know there were previous feature requests to visualize data within the Harbor UI. I think our approach is that we will generate the data at the level of granularity that the users need, and then they can use something...
D: Do you mean that in the Harbor UI we add some dashboard features, in our Harbor UI?
A: No, I'm saying we want to avoid that, right? We want the Harbor UI to stay, you know, really clean and concise, informative. But there's so much you can do with data that it's hard to say exactly what should go into Harbor, so maybe we should just avoid that altogether. But we should definitely get the data that the customers or users need into Prometheus. So...
A: All right, cool. Thanks to Cameron for the demo, and thanks to Duncan for the demo. That's all I wanted to share. Are there any other questions or comments?
A: Right, we're removing Clair as an embedded image scanner. So when you install Harbor you'll no longer have the --clair option; you will only have Trivy as the default scanner. But you can still install Clair in an out-of-tree fashion; that's why we did the whole interrogation services framework. So you can still install it and pair it with Harbor, but it's no longer going to be something that comes with Harbor, because I think it's confusing.
A: Why would you have two embedded scanners when Trivy is clearly superior all around? So, the upgrade experience: if you have Clair installed, that's the only difficult case to handle. If you have Clair as your only embedded scanner right now, we'll have some kind of PSA, some kind of disclaimer during installation, that tells you: please install Clair after you upgrade Harbor, or please consider using something like Trivy. So that's, unfortunately, the only case that won't have, you know, the most intuitive experience, because we're going to be getting rid of Clair from the code base. So for 2.2, I believe we're still on track for a January release.
A: All right, well, thanks everyone. Any other...
F: I'm Henry. So do we have any plan or any sessions at the coming KubeCon North America?
A: Right, yeah, it's good that you mentioned it. We do have one session and three office hours at KubeCon North America, coming up on the 17th, I believe.
F: We can share it on Twitter, so people will know and can join the sessions.
F: Oh, and also I have another session, with [inaudible], also talking about artifact management. It will be part of an introduction to Harbor's functionalities, together with the best practices in this area. So there will be another session in addition to the Harbor session; we have Harbor plus the best practices on artifact management. So...
A: Yeah, so I think the next step is, you know, we possibly want to move that in. If it's deployed externally to Harbor, then we want to put it under goharbor if possible, right, where everyone can have it. I don't know how Cameron feels about that, but I think, you know, ideally people would just go to goharbor to get everything they need for that.
B: Yeah, I definitely don't want to keep the repo underneath the Indeed org, so I'd be happy to transfer that.
E: Yeah, before moving it, you know, under the goharbor umbrella, maybe we should have a further discussion. I think Daniel mentioned that he has some comments; actually, from my side, I also have some comments. So I mean, before moving it to Harbor, we can have some further...
A: Yeah, we can set something up to discuss it. Today is just a sharing session, right? It's for Cameron to come talk about the work that he did. Before we merge it, before we actually move it over, we definitely have to do, you know, some more vetting and code cleanup. So...
C: Okay, okay. So, Cameron, shall we open issues, if we have any comments, on your repo under Indeed?
B: Yeah, you can open issues on that right now. That's fine.