From YouTube: Harbor Community Meeting - March 10, 2021
A: All right, hey everyone, welcome to the Harbor community meeting. Today is March 10th. I don't have too much of an agenda. I just wanted to talk about the year we've had, what a great year, and then, you know, we discussed sharing the roadmap with everyone on the first of every month. We have a longer release schedule of three months, so maybe not monthly; maybe every two months, or at the beginning of every release. But yeah, for today I just wanted to talk about the things that we accomplished in 2020 and the things that we are trying to accomplish in 2021. That's it. And then I think Jonas has a PR out on this new concept of a project maintainer, which I think is great, and that sounds good to everyone. So maybe we can talk about your PR at the very end. Jonas, what do you think?
B: Yeah, absolutely. This is based on conversations that we've had for a while now, and Jolin pinged me last week to say: hey, maybe we should write this up and make a proper proposal. So I did. The essence of this is to give contributors more autonomy within the project and to make sure that we can rely on more community members and more sub-project maintainers to maintain sub-projects of Harbor.

B: We want to enable individuals to take ownership of certain parts of Harbor without being burdened with ownership of the entire project: having ownership of a smaller sub-project and being heavily involved within Harbor on that sub-project, instead of taking on all of the responsibility of a core maintainer. So that's kind of it: giving people more ownership, more autonomy, and building up the community of sub-project maintainers.
A: Okay, yeah, I guess we just discussed it, so there's a PR out on the community repo in goharbor. Everyone should go take a look, give your thumbs up or thumbs down, and share your thoughts. I think it's great. I was a little hesitant at first because I think it creates some confusion around what a project maintainer is, and we'd have to establish a process around when a project maintainer becomes a core maintainer, right? Like, how do you become a core maintainer, and what is the end goal of being a project maintainer? Are you just trying to work on that specific aspect of the project, or are you trying to slowly work your way up, be more involved, and become a core maintainer? I don't know, those are all good questions, but I think it does allow for people to get more involved.
B: Yeah, absolutely. I see it as a similar take on what we do within the Kubernetes project, where we have sub-project owners, we have SIG chairs, we have a bunch of different SIGs that have specialized interests in certain pieces of the project. Not everyone can dedicate their time to being a core maintainer, but being a sub-project maintainer might work, depending on work-life balance.
A: Okay, cool. So I'm just going to go down this list here, talk about what we delivered in 2.2, some of the highlights, and then the roadmap for 2021. Feel free to ask any questions as they come up, or we can do a Q&A at the end. So yeah, we released 2.2. This was a long release, probably almost four months. So right now we have multi-project scoped robots; sorry, these are robot accounts that can reach into multiple projects and have a specialized set of API permissions.
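A rough sketch of what a multi-project robot account request could look like; the field names and endpoint below are assumptions based on the Harbor v2 API, not something stated in the meeting:

```python
# Sketch: build the JSON body for a system-level robot account that can
# pull from several projects. Field names follow the Harbor v2 robot
# API as I understand it; treat them as assumptions.
def build_robot_payload(name, projects, actions=("pull",)):
    return {
        "name": name,
        "level": "system",   # system-level, not tied to a single project
        "duration": 30,      # days until the credential expires
        "permissions": [
            {
                "kind": "project",
                "namespace": project,
                "access": [{"resource": "repository", "action": a} for a in actions],
            }
            for project in projects
        ],
    }

payload = build_robot_payload("ci-puller", ["team-a", "team-b"])
```

The payload would then be POSTed with admin credentials to something like `/api/v2.0/robots`.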
A: We have integration with Prometheus, finally. And then we've extended the proxy caching capability beyond Docker registry to GCR, Quay, ECR, and ACR, so pretty much all the popular registries, and especially all the ones functioning as public registries that we as a community view as critical.
A: The Aqua CSP enterprise scanner integration is essentially a byproduct of the system-level robot accounts. And then we've been saying for a while now that we're going to deprecate Clair, and we finally did, so I hope this wasn't too much of a surprise to anyone; we've been calling it out in the community meetings and in the Slack channels. So yeah, please take a look at 2.2.
A: 2020 has been such an incredible year. This is the first VMware-originated project to graduate from the CNCF. It was incubated fully in Beijing research and development, in 2015 actually, I believe, so it's been quite a journey. As for the project stats that I've listed here, the numbers pretty much speak for themselves.
A: We want to thank all the external maintainers, all the contributors, and the people who attend these community meetings; every little bit helps.
A: Other projects typically support only the latest release; we support two minor releases with lots of minor patches, so I think that's really impressive.
A: Moving on, we were the first registry to support machine learning workloads on Kubernetes, such as Kubeflow. This was something that we worked on with ByteDance, and this customizability was made possible by conforming to the OCI specs.
A: The proxy caching was really critical, and the timing was really good: we released it right before the Docker Hub rate limiting went into effect. There was a presentation that we did a while ago from Shikhar, who runs an IT team within VMware, and he came out and shared his experience about using Harbor to support lots of internal teams who were getting hit with Docker Hub limits.
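Mechanically, pulling through a proxy-cache project just means prefixing the upstream image path with the Harbor host and the proxy project name; a small sketch (the host and project names here are made up):

```python
def proxied_ref(harbor_host, proxy_project, upstream_image):
    """Rewrite an upstream image reference (e.g. 'library/nginx:1.21')
    so it is pulled through a Harbor proxy-cache project instead of
    hitting Docker Hub directly."""
    return f"{harbor_host}/{proxy_project}/{upstream_image}"

ref = proxied_ref("harbor.example.com", "dockerhub-proxy", "library/nginx:1.21")
# e.g. docker pull harbor.example.com/dockerhub-proxy/library/nginx:1.21
```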
A: The Harbor operator is something we've been working on for a couple of months now. We have a semi-working prototype, but it's going through some heavy edits, and this could not have been possible without OVHcloud. I think we always thought the operator as a pattern wasn't quite there yet for the main stage.
A: We had our reservations going into it, and it feels a bit heavy-handed for single deployments of Harbor, which are super easy anyway. But I think through this journey we learned a lot about the intricacies of the operator and the total range of capabilities that it can provide, and I think we'll see some massive dividends down the line.
A: So here we have the list of companies that are based on Harbor or powered by Harbor. You can see I split those into two categories here. I mean, what a lineup of companies that run on Harbor and use Harbor internally; this speaks volumes about the features and the serviceability of the project.
A: I was totally astonished when I was compiling this list of adopters. It's used pretty heavily in VMware itself: the VMware Tanzu Network registry is the delivery platform for a lot of the projects that you would download from VMware as a customer, and Tanzu, which is our VMware-flavored Kubernetes platform, also uses Harbor quite a bit. And then we have OVH, we have the SUSE CaaS registry, Tencent enterprise, Rancher, container-registry.com; all of these are using Harbor. It's just incredible.
A: So I just want to say thanks to everyone who played a huge part in this. Big shout-out to the maintainers and contributors, to everyone in the community who did anything with Harbor, be it using Harbor and sharing it by word of mouth, or participating in the GitHub project. All of you helped the project get to where it is today. So, Jonas, or anybody else, feel free to jump in. I feel like I'm doing all the talking, but you're doing great.
B: Yeah, I just want to say thank you to the entire community. It's been a fantastic year. As Alex said, it's been super fun to see the community grow and the adopters list grow as well. It's just been fantastic.
A: Yeah, I think we did a really good job with the documentation as well. Abigail did a really great job with lots of additions to the docs. I think the docs are really important for an OSS project, and I think she strikes a good balance between technical depth and accessibility, which a project like Harbor needs to attract a wide variety of users. We have users from all over the world, and I think the docs have been a highlight for sure over the last year. So we have a ticket here, pinned to the GitHub project: just share your feedback on how you're using Harbor. You can feel free to share your employer info, but you don't have to; it can be anonymous. We just want to know how you're using the project, what you like, and what you don't like.
C: Hi Alex, thank you for organizing all these projects during the last year. I want to add to that: the Harbor team has actually authored a book in Chinese for the Harbor project. It was published last year and is circulating in the Chinese developer communities, and now we're working on an English version of the book, and hopefully we can get it published sometime this year.
C: If anyone in the community is interested in helping with reviewing the English version, you are welcome to let us know, and we can maybe send you a rough cut when it is available.
C: Well, I'm finding translators now. When we have the right people, then we can probably start the translation job, so it will take a while. Hopefully sometime in the second half of this year you may be able to get an English version.
C: Back to you, Alex.
A: Thanks, Harry. If you have a link to the book, I can add it to the community meeting notes, if anybody's interested.
A: Yeah, so I'll say a few words about the roadmap for 2021. The Harbor operator is one of the biggest things that we're working on right now; it's a great hook for any company offering Harbor as a service. It will massively improve day-two operations and provide enterprise-grade HA to Harbor as a K8s cluster, with redundancy across your key components.
A: Fast scale-up and scale-down, sensible defaults, and we'll have a story around backup and restore. So the operator is being actively worked on right now under the goharbor project; feel free to take a look if you're interested. We're planning on a release, I think, maybe within the next month or two.
A: The other thing we're working on is Harbor Lite. This is a lightweight registry instance that's great for serving workloads at edge nodes. So this is going to be a Harbor with a reduced feature set and a smaller footprint: a smaller set of containers running, possibly without signing or scanning, if the images can only be propagated from the central data center Harbor.
A: Notary v2 is something we've been working on for about a year now, which addresses how you persist the image signature across different registries. We don't have a really good story around that, and we're fully dependent on Notary upstream to solve this, so we're looking to increase our equity in this project, working with other teams within VMware to drive this progress in Notary v2.
A: Docker distribution is an old project that was recently donated to the CNCF sandbox, and two of the Harbor maintainers have joined docker distribution as maintainers as well, and will be spending quite a bit of their time on docker distribution upstream, because it is still a really critical component of Harbor.
A: We're working with maintainers from other projects like GitLab, GitHub, Docker, and DigitalOcean. I think we're working on setting up the CI right now and getting a release out as soon as possible. If you follow the project, you know they've been on a pretty long hiatus, so we're looking to get a release out as soon as possible.
A: We've had lots of great ecosystem partnerships: good partnerships with SUSE and with Huawei, and obviously working with OVHcloud on the development of the Harbor operator. These are all things that we're looking to invest in to make Harbor stronger; you need to have great partners. So, security is a big story for 2021. We're looking at integration with a project called Tern, which is an SBOM scanner.
A: So, in addition to scanning for container image vulnerabilities, you can scan for the full list of dependencies of your images, and possibly add license checkers. We're also looking to integrate with the Open Policy Agent framework to gate access to image pulls from Harbor based on signatures or CVE scan results. Right now we have some of that, but this will allow us to write more complex logic around the ability to pull images, or to replicate, or to hand off to the P2P side.
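The kind of pull-gating logic described above could, in principle, look like the following; this is a minimal sketch in plain Python rather than actual OPA/Rego, and the image metadata fields are invented for illustration:

```python
SEVERITY_RANK = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def allow_pull(image_meta, max_severity="medium", require_signature=True):
    """Decide whether an image pull should be allowed, based on its
    signature status and worst CVE severity. The image_meta shape
    (keys 'signed' and 'worst_cve_severity') is hypothetical."""
    if require_signature and not image_meta.get("signed", False):
        return False
    worst = image_meta.get("worst_cve_severity", "none")
    return SEVERITY_RANK[worst] <= SEVERITY_RANK[max_severity]

ok = allow_pull({"signed": True, "worst_cve_severity": "low"})
blocked = allow_pull({"signed": True, "worst_cve_severity": "critical"})
```

In a real OPA integration the same decision would live in a Rego policy evaluated by the admission point rather than in application code.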
A: It's called FIPS 140-2, and it's basically a specific set of TLS cipher suites that are allowed for government use. That will play into upgrading our base images from Photon to possibly a higher version of Photon, or something Ubuntu-based, which we're exploring right now, and then combining the entire set of Harbor images with the necessary Golang toolchains to become fully FIPS compliant.
A: So if you're a company that hasn't been able to use Harbor because of certain compliance requirements, these are things that we're looking into right now. And I know this ties into the Harbor at the edge story.
A: And then, finally, we have been working with a partner company to release an arm64-based Harbor. I think it's mostly done; we just have to put it out into the goharbor project, but this is something that the community has been asking for for a long time. I just think it's interesting and something that would be impactful for companies downstream looking to use Harbor as well.
A: So that's pretty much it. I compiled a list of some of the most critical items for 2021. I don't know if I...
D: There have been some community users that have been using Harbor seriously for a while, and they've started to see some performance issues, unfortunately. So we're going to work together and improve that, and we're going to have regular meetings to set the goals and have a framework to add more test cases to the performance test suite, to make sure that as Harbor scales it still reaches certain criteria in terms of performance.
A: Yeah, I think I saw an issue with PostgreSQL in long-running environments.
A: So yeah, I don't think these are listed in order of importance, necessarily, though in some ways they are, but the operator is super important. We have a goal to deliver Harbor Lite, which might be a separate Harbor instance under the goharbor project. I don't know how we're going to do it yet, or what it's going to look like, but it's going to be a smaller Harbor.
A: So the board is a little outdated right now. I'm trying to lock down the stories for 2.3 that are relevant to the community, and then I'll move those over to the 2.3 swimlane as well, and I'll turn the product suggestions into product commitments. We've already started on 2.3, actually.
E: I have a question, or a topic that I would like to bring up. There was recently a question from the Kubernetes community, because they are running their own container registry, which is currently hosted on GCP, and they reached out to the Harbor community to ask for support because they would like to use Harbor for that. I reached out to them and asked if we could help them somehow, because they have some requirements.
E: Maybe a bit unusual requirements, and I don't know if you know more from the community. There is some effort going on, because I had contact with two of the community members who are responsible for this effort, and so far I could only see the requirements, but I'm not sure how we could help them.
A: Yeah, so do you know what the ticket number is?
E: They asked this question on the Slack channel and also on the Kubernetes channel, and it's basically this request to configure the registry project from a values YAML file. But that's not quite precise, because they have more requirements there. Maybe I can update the issue, because I know they have a kind of documentation, a markdown file, where they specify their requirements, and maybe we should understand those.
E: I think maybe we should focus on the issue and put everything in the issue so that we have it unified, because right now some discussions are on the Kubernetes channel and some discussions are on the Harbor channel, so it's a bit distributed at the moment. There is no single point of contact around this topic.
B: Yeah, I see the Slack messages from Sunday, March 7th, in the Harbor channel, from Hippie Hacker and Caleb Woodbine. Yeah, exactly.
B: It's not an issue; it's comments in the Harbor channel on the CNCF Slack. I'll ping you in there.
B: I think this would be great. But anyway, were they looking at using a container registry as the solution for that, or...?
E: So, they have this self-hosting approach. Currently they get credits from Google, and they're going to get credits from other cloud providers, and then they want to distribute the containers across different locations, so that it's distributed and people from around the world can fetch the images: for example China, because there are restrictions there, and then also North America and Europe.
A: Yeah, I don't have the answer to that right now. I think Daniel gave pretty good answers to the first two; the other two we can think about.
A: This kind of reminds me of another thing that I should talk about in the roadmap. There was an issue complaining about how the configuration of Harbor has changed quite a bit since, I don't remember what the first release was, it was like 1.6 or 1.7, pretty early. And here is what we're thinking now.
A: Companies have requirements, like compliance or legal requirements, to track any changes made on Harbor by the system admin, and so they want to push those configuration changes from a config system, right from CI, so you can track who made the change, when, and for what purpose. So what we're thinking about right now is maybe locking down the API access for the system-level settings and having them come from CI instead, so taking out API access.
D: Let me explain a little bit of background. The requirement is that they want to configure Harbor in a declarative way, but, you know, it's currently possible, after Harbor has started, for the admin to update the configuration using the configuration API. So we want to make sure there's only one way to do that. Currently the decision is that if we allow the user to configure Harbor declaratively, we're going to disable the system configuration API at the same time.
D: Does that make sense? We are working on the proposal; hopefully at the next community meeting we can review it with the community.
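To make the trade-off concrete, declarative configuration roughly means diffing a desired config file against the live settings and applying only the changes. A sketch of that flow; `/api/v2.0/configurations` is the real Harbor configuration endpoint, but the applier logic here is hypothetical:

```python
def config_diff(current, desired):
    """Return only the keys whose desired value differs from the live
    Harbor configuration; a declarative applier would PUT this subset
    to /api/v2.0/configurations instead of editing settings in the UI."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

live = {"auth_mode": "db_auth", "project_creation_restriction": "everyone"}
wanted = {"auth_mode": "oidc_auth", "project_creation_restriction": "everyone"}
changes = config_diff(live, wanted)
```

With the API locked down as D describes, this diff-and-apply step would be the single path through which system settings change, which is what makes the changes auditable from CI.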
E: So it's an opt-in? Okay, yeah. Because I remember that Portus had the same kind of declarative functionality, and on one hand it was quite good, but on the other hand it was also very difficult for people to maintain their instance if they didn't have access to the configuration. So it was double-sided: on one hand it was good, because you could configure a lot through config files.
A: Great. I don't know if it's opt-in yet; I think that's still something we're thinking about. But yeah, maybe making it optional would be good, because once you do it, any change that you need to make will re-trigger a restart.
E: And it was good on one hand, because you have everything in the config file, and then you restart the instance and it's good and nice. But if you have a user base where people don't have config access, so that you have a segregation between the instance operators, who deploy, and the maintainers, who maintain the instance, there is a kind of disconnect, and it was difficult for those people to update the configuration, because they don't have access to it. It took a lot of effort to maintain this thing, because you always need to restart the instance, and you cannot always restart an instance; and if something is wrong in the configuration, the instance won't come up.
E: Right now it's really a nice feature: you just go into the config, set your OpenID Connect credentials, copy and paste, click on save, and it just works. If you need to do it with a config file, then you need to create a pipeline where you have the config, you have to check it in, you have to deploy it and get it restarted, and then you can see if it's working or not. So the process will take much longer instead of 10 seconds.
E: So it would be nice if there were two options: you have a configuration file, and also the UI, for example, where you could do the changes. And there would also be people that would say: okay, we don't want to do this, we are happy with configuring it manually once in a while.
D: Yeah, that's also our plan. So make sure you attend the next community meeting, and hopefully we can discuss together and finalize the proposal. Okay, happy to help.
A: Any other questions or concerns, or things you'd like to discuss?
F: Yeah, can I talk? Okay, so hello. Two weeks ago I appeared briefly and didn't say anything, so I'm sorry for that.
F: This was a requirement, and also because we have security concerns, and there are some features with Notary that we want to integrate.
F: So currently I manage this service, and I found some problems behind our SSO, and it's very specific to Singularity. We use Harbor as a fully OCI-compliant registry, and I found some problems that I'm still tracking. I don't know if this is pertinent to say here; I'm just giving a summary of the situation.
F: I ran the OCI conformance suite, the repository that exists just to check OCI compliance, and behind our SSO we found some problems. I created some GitHub issues to track this. I am also actively tracking this with the Singularity team, because it apparently only happens with their image or software.
F: And yeah, that's it. Currently I'm upgrading to version 2.2, the newly released version, so still no updates on that, because internally we also want to add other stuff.
D: I'm sorry, I'm not sure if Alex mentioned this, but we see a performance issue in version 2.2: due to the schema upgrade there may be a situation where Harbor core uses a lot of CPU, and we already have a fix. So maybe you can hold on for a moment and use 2.2.1. And as for the problem you are facing using Singularity, I have some impression about the issue, but it would be helpful if you could ping me the issue number via Slack.
A: Cool, thanks. The second point is that we've actually been working with that team, because there are some fixes needed in docker distribution to be able to fully pass the conformance suite, and that's why.
F: I think this is a very weird issue, because I don't think it's tied to a single specific component; or there are multiple issues that are affected the same way, because with the GitLab registry, which is what we use currently, we also see the exact same error. So it might be a problem with the Docker registry, or yeah.
A: Yeah, it could be. There are a couple of fixes that we are dependent on docker distribution for; that's why we're actually participating in docker distribution upstream. Quite a few things need to be merged.
A: And I don't think we're testing Singularity. We tested Singularity as part of the 2.0 rollout; I think we used ORAS to test pushing and pulling, just very basic interaction, but we've not gone beyond that.
F: Well, the issue didn't start here, so to summarize it very quickly: we tried to push Singularity images without the OCI authorization and it works, but with the OCI authorization it fails. It might be a configuration problem on my side, but I'm pretty sure that's not the case, because we can use Harbor for image uploads of ML models and it works, but not for Singularity.
F: The client that they use to interface with the registry is the same, so it's still a bit fuzzy. And then I ran this registry conformance suite and found out that there was a POST method that failed, a simple POST, and it actually made sense with the error that I was observing. This is where we stopped: at the conformant artifact level. If you follow the Singularity issue, it's all there.
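For reference, the failing POST is presumably the blob-upload initiation from the OCI distribution spec, which is the first request a push makes. A small sketch of how a conformance check would construct it (the host and repository below are placeholders):

```python
def blob_upload_url(registry, repository):
    """Build the OCI distribution-spec endpoint that starts a blob
    upload: POST /v2/<name>/blobs/uploads/. A registry that rejects
    this POST fails the push portion of the conformance suite."""
    return f"https://{registry}/v2/{repository}/blobs/uploads/"

url = blob_upload_url("harbor.example.com", "myproject/myimage")
```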
A: Yeah, cool, we'll definitely take a look. Can you also post how to reach you, what your email is?
A: You know, over the past year or two we've seen lots of people from CERN showing up to Harbor community meetings, which is really great, I think.
A: Cool, that's all I have for today. I think Jonas already talked about the PR for the project maintainer, which I think...
A: All right, well, if there's nothing else, thanks everyone for attending. See you in two weeks. Thank you. Thank you.