From YouTube: CNCF Harbor's Community Zoom Meeting - Feb 22, 2023
A: All right, we're recording. Hello everyone, my name is Orlin Vasilev and I'm the community manager for Harbor. Today is February 22nd and this is the official community meeting, so please follow the code of conduct and just behave accordingly. With that, I'm going to paste the community meeting notes into the chat once again, so you can add yourself, your topics, your updates, or whatever you want to discuss with us today. With that, I'm going to share that same document with you.
A: Okay, I hope everyone can see that. And with this one... yeah, I can see there are not many topics added, and not many folks added themselves. But can you go ahead and talk about the cloud events?
B: Hi Orlin, can I share my screen?

A: Oh yeah, sure.
B: Okay, well, what I'm presenting today: my topic is discussing the proposal to support the CloudEvents format for webhooks. Actually, it's not only about CloudEvents; we also do some improvements for better webhook management. So there are two major features in this proposal: one is the webhook refactor, and another is CloudEvents support.
B: So if Harbor can support integrating CloudEvents into the webhook, that can help it connect better to the different vendors who follow the CloudEvents spec. That is the benefit for Harbor. Okay, so we have two goals, which relate to those two features. One is to refactor the webhook code base to migrate the job to the common task framework; I will explain why we need to do this in the following section. And another is to add the CloudEvents integration.
B
Also,
we
have
two
known
goals.
One
is
one
is
we
will
not
return
all
the
web
hook
job
histories,
because
in
the
previous
Hardware
versions,
this
table
has
never
been
cleaned
up,
so
the
data
volume
of
this
table
may
be
held.
So
it's
not
a
good
chance
to
migrate
all
the
old
data,
so
we
plan
only
to
only
migrate
the
last
one
for
every
event
type
and
another
long
goal
is
we
don't
want
to
implement
all
combinations
of
cloud
events
back
because
it's
all
comes
back.
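The "migrate only the last job per event type" plan above can be sketched as a small selection over the legacy rows. This is a minimal Go illustration; the `Job` struct and its fields are hypothetical stand-ins, not Harbor's actual schema.

```go
package main

import "fmt"

// Job is a simplified stand-in for a row in the legacy webhook job
// table (field names are illustrative, not Harbor's schema).
type Job struct {
	ID        int
	EventType string
}

// lastPerEventType keeps only the most recent job (highest ID) for
// each event type, mirroring the proposal's plan to migrate just the
// latest row per type instead of the whole table.
func lastPerEventType(jobs []Job) map[string]Job {
	latest := make(map[string]Job)
	for _, j := range jobs {
		if cur, ok := latest[j.EventType]; !ok || j.ID > cur.ID {
			latest[j.EventType] = j
		}
	}
	return latest
}

func main() {
	jobs := []Job{
		{1, "PUSH_ARTIFACT"},
		{2, "DELETE_ARTIFACT"},
		{3, "PUSH_ARTIFACT"},
	}
	fmt.Println(lastPerEventType(jobs))
}
```

Whatever the real schema looks like, the shape of the selection is the same: group by event type, keep the newest row, discard the rest.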
B: Next, let's move to the implementation. For the front end, this is the UI for this proposal. We will add a payload format option selector in the webhook policy edit page. It's not a required option, and it's only valid when the notify type is HTTP.
B: Okay, next is the page for the job histories. Previously there was no such page for the webhook policy; you could only see the last trigger time for one event type. But we refactored this part into execution histories, just like replication and tag retention.
B: There is an execution list, and when you click one execution, it will redirect to the task page. In the task page you can see the task status, the task start time and end time, and you can also click the logs button to see the logs from when the job was running.
B: Okay, that is the change for the UI part. For the back end, regarding the refactor, the first thing I want to explain is why we need to refactor this part. In previous versions Harbor unified the scheduler and task framework, but the webhook is a legacy vendor: it implemented the task and execution by itself, in its own separate way, while other job vendors such as replication, tag retention and GC use the unified way.
B: This part can be summarized in the following steps. Firstly, we need to create the webhook job via the manager or controller provided by the task package; this package is our unified task framework. Next, we need to update the job implementation, because in the previous implementation there were no debug logs, so users could not tell what happened while the job was running; we will add more debug logs for this. And next, we will introduce new APIs in the unified style for the operations on webhook jobs.
B: Actually, there are three APIs. The first is listing the executions for a policy. This list API relates to this page: when the user clicks one policy, it can display the executions under this policy by calling this API to list the executions. And the next is the tasks API.
B: This API relates to this page. That means, when you click one execution in this picture, it will call the API to list the tasks under that execution.
B: The job log will look like this: some information about when the job starts, what the request body is, the response code and whether it was successful, something like this.
B
We
also
need
to
adjust
the
Legacy
API
Handler
Logic
for
the
compatible
for
the
back
of
water
compatible,
because
so
so
we
we
do
not
remove
the
old
apis,
but
due
to
the
refactor
for
the
webhook
job
Manager
Management,
we
need
to
do
some
of
the
just
the
adjustment,
but
actually
the
two
apis.
One
is
list
jobs
under
the
project
and
another
is
return.
The
last
trigger
information
of
the
project
weapon
policy.
B: Next is migrating the old job rows to the new tables. In the previous implementation, the webhook job was stored in the table notification_job, and this table was never cleaned up. So we need to migrate the data in this table to the new tables, execution and task, but we will only migrate the last job for every event type, from the performance point of view. The following script can help to do the migration.
B: By the way, this script can only help migrate the static data in the database. But if you have some runtime jobs, you need to clean those up via the jobservice dashboard, because those running jobs are in the Redis queue, so from the script we cannot clean them.
B: Finally, we need to drop the table notification_job, but we may need to add release notes to tell users that if they want to keep the table, they should back it up before upgrading.
B: Okay, next is the part about CloudEvents. CloudEvents is a spec for describing event data in a common way. It defines some attributes and types; some fields are required and some fields are optional. Here I list some important fields. First, the id is an attribute to identify the event; it's a required attribute. The source identifies the context in which an event happened.
B
It's
also
a
record
and
the
spec
version,
spec
or
version
is
point
out
the
which
the
events
use
the
spec
version
right
now,
it's
P1
the
type
is
the
field
to
describe
the
type
of
events
related
to
the
original
occurrence.
B
And
next
for
the
following
at
Fields
is
optional
one.
So
that
means
you
can
have
these
these
fields
or
or
don't
have
it's
okay
to
the
content
type
if
and
stuff
that
the
the
the
content
type
of
the
data
value.
This
data
schema
is
the
schemer
of
the
data.
The
data
is
the
your
event
payload.
B
The
subject
is
the
describe
the
subject
of
your
event,
the
term.
That
means
what,
when
the
event
happened,.
B: Okay, next is the event type mapping. In previous versions, when you received a webhook payload, you would get the event type for that event. If you choose the CloudEvents payload format, there is a mapping from the old type to the CloudEvents style, which is domain.object.action, so you can refer to this table to get the new type in the CloudEvents way.
B: Regarding the interface, we need to define an interface to handle the event data formatting, because if we have this interface, users from the community who want to contribute their custom or wanted vendors can add a driver just by implementing the methods of the interface. The following is just a demo case of how to implement the methods; for example, this is a JSON formatter.
B: Okay, finally, there are two examples to show the payload in the CloudEvents way. First is the push artifact. You can see the specversion is 1.0 currently, because we follow that spec version, so this value is fixed; but if in the future we follow 2.0, we should update this value. Next is the type: the type is the event type, which you can get by following this table. And next is the source.
B
What's
the
who?
Oh,
that
means
Who
is
the
source
of
the
event
for
for
for
this
case,
this
is
a
policy.
The
policies
ID
is
wrong
and
under
the
projects
one,
so
that
means
of
the
ID
equals
one
policies.
One
web,
one
web
Hood
policy
under
the
project.
One
trigger
this
event-
and
the
next
is
the
id
id-
is
the
unique
ID
for
the
hardware
case.
We
will
use
the
job
ID
to
to
field
to
fill
in
this
value.
B
The
time
is,
the
event
happened
in
in
the
work
in
what
time
and
Operator
Operator
is
a
hardware
application
extension
value.
It's
not
in
the
cloud
events
back,
but
but
it's
the
hardware
application
Fields,
so
it
it's.
It's
represents
the
event
operator
of
this
of
this
event
and
at
last
year,
is
to
to
fill
data
content,
type
and
data.
The
data
is
your
payload
data
encoded
by
the
data
contents
type
way.
B: And next is the case for replication. These fields are similar to the push artifact; the only difference is... oh sorry, it's a typo here: the datacontenttype should be application/json, and the data should be this one.
C: Sorry, I have one question. So we have the ability to show the payload of the request sent by Harbor, right?
B
Yeah
he
can,
he
can
click
the
logs
to
see
the
when,
when
the
user
click
the
logs,
he
can
see
the
the
logs
and
in
our
logs
we
print
the
webhook
payload.
B: Okay, but we can also consider putting the payload in one of the columns of the execution or task.
B: Just like this picture shows, we can store it in the extra attributes, then the user can click it to see the payload details.
A: I think Vadim proposed some changes on the namings on that specific part, yeah.
C: Okay, okay. By the way, the webhook enhancement and also the support of CloudEvents is the anchor feature for Harbor 2.8. I'll stop sharing my screen, thanks.
B: Although they do not conflict, I think we do not need to split it into two proposals; I think we can track them one by one.
C: We just need to refactor the existing code base to make it able to extend the current support scope from Slack to CloudEvents, and eventually, in the future, we can easily extend it to add another kind of webhook support. So that's why we decided to do the refactor work for the webhook. Then, based on that, we can support CloudEvents easily, yeah.
A: And another one related to this one: if I'm using the webhook in some way, is there a change to the way I'm using it? Yeah.
A: Okay, thank you, Chlins. The next one we have on our agenda for today, give me a sec, is Simon from VMware, who wants to briefly give an introduction of something called Project Narrows. Simon, are you with us?
G: Sure. Hi from Beijing. Today I will be giving a very brief introduction to a Harbor-related open source project called Project Narrows.
G: Project Narrows aims to address the security challenges in the Kubernetes space. We think that currently organizations typically implement a prevention-based security strategy.
G
This
means
image
is
asking
at
the
time
of
inter
introduction
to
a
cluster,
so
vulnerabilities
can
so
vulnerabilities
are
caught
in
your
time
and
the
images
are
flagged
and
workloads
can
be
quarantined.
So
that's
basically
What
project
Narrows
are
provides.
G
G
Users
can
understand
the
overall
security
posture
of
the
of
their
clusters
by
scanning
the
workloads
either
clusters
and,
at
the
same
time,
users
can
ensure
that
the
actual
security
situations
match
their
security,
compliance
expectations
and
a
large
any
breaches
user
can
Define
some
security
policies
or
Security
baselines
in
Project
errors,
and
this
with
these
policies
it
can
quarantine,
workloads,
Source
from
a
valuable
images
and
stop
the
propagation
of
risks.
Additionally,
it's
done
kubernetes
cluster.
Additionally,
with
proton
errors,
the
users
cause
service
configurations
following
the
Cris
benchmark.
G
So
so
you
know
it's
a
GitHub
report
or
this
open
source
project
and.
G
Here
here
is
a
basic
workflow
of
this
project.
Firstly,
image
are
cached
in
Harbor
from
any
third
party
Registries
maybe
like
to
have.
Then
the
image
can
be
scanned
in
Harbor
with
the
there's.
Some
scanners
like
TV,
and
the
security
data
is
generated
in
Hardware
the
security.
Then
the
secure
data
can
be
consumed
by
Levels
by
some
kubernetes
clusters
that
have
progenerals
in
store
and
then
the
scanning
results
of
communist
cluster
is
generated
and
and
some
are
loss
and
some
some
some
of
us
can
be
generated
as
well.
H: Today we're going to show a technical preview of our project. The environment we're using is set up with an upstream Kubernetes and is ready to use Project Narrows. Under settings, a platform administrator must specify the security data source, and today we're using Harbor. In this setup, you'll create the secret to connect to Harbor and fill in the endpoint of the Harbor instance, then define the image scanning interval.
H
After
this
setup,
the
images
will
be
scanned
periodically
in
Harbor
to
get
the
static
image
data
which
project
Narrows
will
use
to
compare
to
when
the
workloads
are
running.
You
can
also
add
known
Registries
to
set
replication
rules
in
Harbor,
and
images
from
these
Registries
will
be
rotated
within
Harbor
then
scanned
to
get
the
security
data
on
these
images.
H: Now that the configurations are complete, the security auditor can specify the scanning rules in the policy section, where they can define quarantining and security rules to create a policy. There are a number of fields to fill out, including how often the scan should run, whether it's weekly, daily, hourly or custom. Users can also fill out the configuration for their OpenSearch or Elasticsearch instances, so all the reports generated can be aggregated into the central place they're using and are easy to analyze.
H: Here you can choose whether you want to generate a report and quarantine the workloads that are flagged. Then, to view these reports, you find them in the assessment section. The application developer and security auditor are going to care most about this area. We have three types of reports generated, correlating to the three kinds of scanners we specified in the settings; so if you only choose one scanner, only one report will appear.
H
This
visual
shows
the
history
of
the
number
of
containers
scanned
by
project
Narrows,
we're
going
to
dig
into
one
of
them
to
see
more
detailed
information
in
this
report.
Default
namespace
was
scanned,
we'll
drill
down
on
the
deploy
test
tube
in
this
container,
a
critical
vulnerability
has
been
identified
and
because
we
chose
to
automatically
quarantine
workloads
with
issues,
you
can
see
that
this
is
the
action
that
was
performed
in
this
section.
It
shows
the
reports
of
cube
Edge
scanner
that
you
specified
in
the
policy
Cube
Edge
checks,
whether
kubernetes
is
deployed
securely.
H
Currently,
we
support
two
kinds
of
queue:
bench
checks:
those
are
kubernetes
policies
and
work
on
node
security
configurations,
there's
also
risk
scanning.
The
software
packages
inside
the
workload
containers
can
be
scanned
by
the
risk
scanner,
and
here
are
the
scanning
results.
We
also
got
the
score
from
zero
to
five
for
measuring
the
severity
of
a
vulnerability
for
each
cve.
Another
area
for
the
security
auditor
is
to
view
not
only
the
security
posture,
but
also
the
risk
Trends,
which
you
organized
into
three
categories:
cluster
namespace
and
workload.
H: In the cluster section, you can see how many running workloads have been scanned and how many violate the baseline. In the namespace section, you can drill down by namespace to view the total workloads scanned and how many are vulnerable, to see the violations. In the workload section, you can see all the workloads that have been scanned by Project Narrows, with the option to drill down. Hope you've enjoyed this tech preview of Project Narrows. In this video we've shown how you can add runtime scanning to your security arsenal by creating policies, quarantining workloads and viewing your overall security posture.
G: Yeah, and actually we have multiple security data sources. The major security data source is from Harbor, from the CVE database; that one is from Harbor. Another kind of security data is something like the rules for misconfigurations; that comes from another tool we integrate, yeah.
G
Yeah
right,
so
we
in
our
roommate
we
plan
to
we
plan
to
also
integrate
some
some
something
kind
of
like
a
runtime
security
detection,
something
like
a
fair
call
or
to
to
to
get
to
collect
the
the
runtime
information
or
runtime
or
postures
to
project
Narrows.
So
user
can
know
much
more
clearly
about
the
security
yeah.
A
Yeah
and
last
thing
from
my
side
how
we
can
help
from
the
community
the
hardware
Community
with
that
project:
do
you
expect
anything
from
my
side
or
our
site
or
just
like
to
spread
the
word
a
bit
that
you
integrate
with?
However,.
G
External
capabilities
for
the
runtime
security
detection
by
using
project
Narrows
so
because
it
integrates
with
integrated
with
Hardware
very
smoothly
and
so
I
think
we
can
yeah.
We
can
promote
this
project
to
to
some
of
the
hardware
users.
A: Okay, I'll ask you to contact me offline with some blogs or something like that, so we can share that with the community and they can give it a try or provide you some feedback. Okay, thank you so much. Thank you very much, Simon. Anyone with questions for Simon about this project, or can we move on to the next topics?
I: Yes, it's hopefully a short topic, but let's see. In the Harbor operator we right now have a problem: the custom resource definitions in our operator are quite large, and there have been some minor additions in the recent pull requests, and the problem has now occurred that the resources are too large for our Helm chart. So basically, the Harbor operator, together with the CRDs, is packaged as a Helm chart, and now that Helm chart exceeds one megabyte, which is the maximum size specified by Helm for resources that can be deployed through Helm.
I
If,
if
anyone
has
ideas,
please
speak
up
otherwise,
I'm
also
happy
to
hear
on
slack
about
ideas
and
how
to
go
around
this.
A: Yeah, it's not beautiful. I never came across this one in the past, especially with Harbor. Anyone else from the maintainer team?
A
Yeah
more
deeper
with
the
other
like
operate
and
maintain
us.
Maybe
they
can
help
just
give
them
time.
Maybe
they
can
respond.
Yeah.
C: Yeah, I know you want more maintainers from the operator team to discuss this. Maybe some options can be removed from the Helm chart, but we should review them one by one, and carefully, yeah. So for your problem, we can discuss it in the Slack channel.
A: All right, I can see some new faces, but before I give the floor for everyone else to introduce themselves, I'm not sure if we have already introduced Anmol to the team. Anmol, can you introduce yourself, please?
J: Yes, hello everybody. My name is Anmol and I am going to be the product manager for Harbor from the VMware side. Apart from that, I also look at packaging and deployment. So, looking forward to working with everyone.
A
So,
in
short,
unmo
will
take
over
from
rotor
and
continue
the
the
work
on
the
Harbor
project
from
VMware
site.
So
if
you
have
any
roadmap
in
this
kind
of
feature,
requests
and
stuff-
and
more
is
your-
is
your
guy
and
I
can
see
some
some
new
faces?
C
I'm
sorry
I
already,
could
you
have
have
to
add
ammo
into
the
Uncle
Harbor
repository?
So
then
we
can
assign.
A: Thank you, I'll add you to the GitHub repository at the organization level and to our mailing lists, so you can follow up on some stuff there. So thank you, thanks.
A: All right, the rest of the folks on the call, do you want to introduce yourselves, for those of you who feel like it and are new to the audience?
A: Nice, so you work together with Jeremy, or...
A
Yeah,
okay,
great
because
in
the
past
we
we
had
Jeremy
in
Pierre.
He
was
also
in
the
ovh
right.
E
Yeah
and
Simon
too
yeah
yep,
yeah
Jeremy
is
still
here,
but
he's
now.
My
manager
and
NPR
left
the
company
yeah
two
years
ago.
So.
A
Yeah
he
yeah,
he
went
to
data
doc,
I
think
yeah
yeah,
and
so
what
are
the
plans
from
ovh
site
you
to
to
be
become
part
of
the
maintenance
team
and
represent
the
ovh
into
the
Harbor
community?
Is
that
am
I
shooting
right
or.
E: We'd like to help on the operator subject, but like I said to Yan, we are a bit late on the version; our customers are on Harbor 2.4, so we need to upgrade it. We are working on it, and when we are on the last version, we will work on the operator to handle new Harbor versions.
A
Nice,
okay,
so
we
have
since
ovh
was
not
super
active
in
the
last
few
months
of
say
here
yeah
we
have
joined
by
Marcelo
from
transform
and
myself
and
with
him
on
the
other
side,
they're
collaborating
on.
H
A
Operator
work,
so
I
think
you
can
get
in
touch
with
myself
offline
or
something.
So
you
can
brief
you
out
with
the
work
and
then
what's
going
on
Kurt
right
well,
yeah.
C: Hi Thomas, I have an idea. Is it possible that we can set up a weekly or biweekly meeting for Harbor operator development, to discuss some engineering stuff like CI or PR issues? I'd like to see several customers from the community who are interested in the Harbor operator, so we can set up regular engineering meetings to discuss engineering items. So is it possible to organize that, can you handle it, or...
A: Okay, so I'll get in touch with you so we can figure this out. All right, thank you, and welcome, Thomas. Anyone else, or we can move on?
A: Nope, I can see your lips moving, but... yeah, okay, Marcel. Yeah, I have to drop soon, though. I just want to bring in another topic. Okay, I guess, take your time. I'm going to just drop another thing: we had a request from the folks from... what was their name?
A
Give
me
a
sec,
you
something
yeah,
uvz
I,
suppose
that's
how
you
pronounce
the
name
and
they
propose
us
to
implement
as
part
of
our
CI,
to
spin
up
a
harbor
instance
for
the
testing
of
the
pr.
When
the
pr
is
closed,
they
can
destroy
that.
So,
if
you
can
check
I've
added
that
into
the
into
your
community
meetings,
notes
for
today,
I
think
it's
pretty
cool,
I've
tested
it
out
with
the
for
backstage
and
it's
pretty
responsive.
So
maybe
we
can
think
about.
A
They
don't
require
anything
from
my
side,
our
side
just
to
let
them
do
it,
which
is
okay,
I!
Think.
So,
if
you
take
a
look
at
this
and
just
provide
some
feedback,
if
it's
you
think
it's
okay
for
us
to
have
such
thing,
but
I
think
I
find
it
nice.
C
It's
a
possible
to
to
have
someone
from
the
same,
to
have
a
to
give
a
timer
for
for
Harbor
to
how
to
use
that
and
I
think
it's
useful
for
us.
It's.
A: All right, I'll try to get in touch with them. Actually, we can drop a line into the PR that they've opened, asking if they can join us at our next community meeting, so they can do a demo and elaborate a bit more. Yeah, all right, okay, thanks. Okay, and I guess: second chance, man, go ahead. No, no worries. Okay, we can try next time, eh? No worries.
A
Yeah,
okay,
all
right,
yes,
are
you
are
going
to
try
again
I
suppose
you're
from
the
Linux
Foundation
main
T
program,
if
that's
correct,
yeah,
just
note,
okay
to
brief
you
on
this
one,
we're
gonna,
do
review
of
the
mentees
tomorrow,
I
hope,
with
oneion
and
and
Vadim,
and
we're
gonna
select
and
continue
with
the
program
as
expected,
yeah,
so
yeah.
Thank
you
very
much
again.
Chenu.
Thank
you
for
for
the
demonstration
Simon.
Thank
you
as
well.
If
that's
it.
Thank
you
very
much.