From YouTube: Harbor Community Meeting 20190828 - Americas Time zone
A: So this is the community meeting. It's about half an hour long, and I'm going to be hosting this one. I'm the PM for Harbor; Michael is also on the line.
A: Michael is great for the high-level things, but for the day-to-day Harbor issues, details, or future items, you guys can ping me on my GitHub or by email.
A: The agenda for today: I'll do a quick community update, then talk about two key integrations that we have going on. Then Stephen Jin, who is the engineering manager for Harbor, will do an update on the 1.9 progress, and then I'll do a quick preview of some things we're working on for 1.10.
A: And finally we have a demo by, I hope I'm pronouncing this right, Martin Joel. He's going to be demoing a Harbor RPM that he built. Just make sure everyone can hear me, right?
A: Okay, so the first thing that I'm going to talk about is GitLab support. We have some ongoing discussions with GitLab to possibly leverage Harbor as the underlying image registry. GitLab, starting in version 8.8, has the GitLab Container Registry, which they built on top of the native Docker registry.
A
So
if
you're
using
gitlab
or
you're
using
gitlab.com,
you
have
docker
container
registry
enabled
by
default.
So
but
some
of
the
value
the
value
adds
that
we're
trying
to
work
that
we're
pitching
are
better
identity
managers,
for
example,
id
support
with
rpac.
We
have
vulnerability
scanning.
A
In
addition
to
you
know
enabling
gitlab
and
one
of
the
issues
that
came
out
while
we
were
you
know
talking
about
gitlab-
was
improvement
on
the
current
gc
process,
so
harbor
supports
on
gc
over
the
last
two
releases.
We
made
some
improvements
to
create
online
garbage
collection,
which
makes
it
possible
to
run
garbage
collection
without
having
to
bring
down
harbor
like
any
older
versions
right.
So
previously
you
had
to
take
down
harbor
log
into
the
registry
and
execute
the
gc.
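For context, the old offline flow he describes looks roughly like the following on a docker-compose based install; the paths and service layout here are assumptions and may differ in your deployment:

```shell
# Offline GC, older style: Harbor had to be stopped first so that no
# pushes could race the garbage collector's mark phase.
docker-compose stop

# Run the upstream registry's collector against its config file.
# --dry-run only reports which blobs would be removed.
docker run --rm \
  -v /data/registry:/storage \
  -v /path/to/registry/config.yml:/etc/registry/config.yml \
  registry:2 garbage-collect --dry-run /etc/registry/config.yml

# Bring Harbor back up afterwards.
docker-compose start
```

With the online GC work described above, recent releases no longer require this stop/start dance.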
A: So that's also one of the things that GitLab was pushing for: being able to keep pushing to the registry as GC is being run. But there's quite a bit of complexity there in terms of how you calculate the affected blobs and layers in any GC operation; you'd have to lock out a much smaller set than the entire project. So if anyone's interested in working on this, please let me know. We have some ideas, possibly involving making some changes to the upstream.
A
But
you
know
we
have
some
conversations
with
gitlab
some
conversation
with
docker,
so
if
anyone's
interested,
please
let
me
know.
B: Hey Alex, just a question: do we have an issue tracking this, so people can pile on there?
A
Yes,
I
will
share
those
after
needing
that
we
have
an
issue
on
github
for
this
yeah.
A
The
second
issue
that
we
or
the
second
integration
that
we've
worked
on
is
using
data
dog
to
as
a
monitoring
or
alert
system
for
hardware,
so
datadog
essentially
monitors
hardware
through
the
use
of
a
datadog
agent.
It
comes
with
graphics
and
a
robust
query
language,
so
we
already
released
our
health
health
api
in
version
1.6.
A
You
know
which
is
fairly
basic,
but
it
has
the
it
reports
on
the
status
of
the
the
core
components
like
carbon
drop
servers,
harbor,
clear
db,
harbor,
core
etc,
but
you
can
use
datadog
to
auto
track
the
status
of
these
health
checks,
and
there
are,
you
know,
dashboards
and
graph
graphic
graphing
capabilities.
On
top
of
that,
you
can
also
track
the
capacity
of
hardware's
image
database.
You
know
how
the
size
of
the
the
actual
users
correlate
to
the
underlying
storage
capacity
of
your
system
of
your
node.
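As a small illustration of what a monitor can do with that endpoint, here is a Python sketch that summarizes a health payload. The response shape, a top-level `status` field plus a `components` list, is an assumption about the health API's output and may differ across Harbor versions:

```python
import json

def unhealthy_components(payload: dict) -> list:
    """Return the names of components whose status is not 'healthy'."""
    return [c["name"] for c in payload.get("components", [])
            if c.get("status") != "healthy"]

# An example payload, shaped like what a GET on the health API might return.
sample = json.loads("""
{
  "status": "unhealthy",
  "components": [
    {"name": "core",       "status": "healthy"},
    {"name": "database",   "status": "healthy"},
    {"name": "jobservice", "status": "unhealthy"}
  ]
}
""")

print(unhealthy_components(sample))  # -> ['jobservice']
```

A Datadog (or Prometheus) check can alert whenever this list is non-empty instead of graphing every component separately.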
A
That's
something
we
don't
have,
but
that's
something
we
want
to
do
for
a
long
time
and
then
you
can
visualize
either
a
projected
disk
usage
analysis
that
data
dog
can
perform
based
on
current
usage
based
on
you
know,
they
have
some
calculation
calculation
engines
for
that.
So
this
is
something
that
they
did
entirely
on
their
own.
I
believe,
and
they
reached
out
to
us
to
show
us.
You
know
what
they've
finished.
A
I
wasn't
aware
of
this
of
this
project
before
they
reached
out
to
us,
but
it's
fairly
interesting
and
you
know
we-
the
harvard
team
has
also
been
looking
at
building
and
monitoring
with
prometheus.
So
the
downside
to
datadog,
obviously,
is
that
it's
a
commercial
solution
to
pay
for
it.
But
for
me
this
is
a
very
popular
open
source
solution,
so
we
I
have
a
ticket
on
here.
A
I
should
do
that
for
the
previous
one
as
well,
so
just
looking
for
community
input
on
what
kind
of
metrics
everyone
wants
to
be
captured,
so
you
know
just
having
to
adopt,
doesn't
mean
that
we're
gonna.
You
know
that's
the
only
solution
I
think
prometheus
is
still
very
popular.
It's
very
promising
and
you
know
for
any
monitoring
system.
The
most
important
thing
is
alerts
and
notifications.
You
know,
graphs
are
cool,
but
I
think
event,
driven
systems
right
as
opposed
to
polling
is
always
preferred.
A
So
something
robust
something
that's
event,
driven
we're,
definitely
interested
in
so
but
there's
a
there's,
a
blog
post
here
at
the
very
top
that
talks
about
you
know
what
they
built
for
harper:
how
to
deploy
some
of
the
features,
some
of
the
metrics
that
you
can
visualize
and
things
like
that.
A: So those are the two community features that I wanted to go over. Stephen, do you want to talk about the 1.9 release and where we are at?
C: Sure, we can just use your screen; I'll just talk through several bullets. First, all the features are listed here, including the CVE whitelist, quota management, and tag retention, and we are reaching the last round of testing to get to RC. We will have a last testing day this Friday.
C
Based
on
the
testing
results,
we
will
declare
whether
whether
we
will
go
out
rc
or
not
on
this
friday
and
right
now,
so
we
have
less
than
20
bucks,
currently
our
pipeline
to
be
fixed
and
we
plan
to
fix
all
those
bugs
and
so
for
the
friday
testing
day.
C
If
anyone
in
the
community
also
increases
on
the
early
build-
and
we
also
can
share
on
the
wechat
and
also
the
site
channel,
to
allow
you
to
get
some
early
access
to
help
us
to
test
and
build
and
based
on
current
estimation,
so
we
may
adjust
the
ga
date
later,
one
record
one
or
two
weeks
so,
based
on
the
testing
results,
we
will
give
you
more
updates.
D: Stephen and everyone, that's great progress. Super excited to see 1.9 almost out the door. This is a significant release with a lot of features that are going to make it easier for customers to manage Harbor, so it's really well anticipated. And, like Stephen mentioned, if there's anybody that wants to help us test, get this release out, and validate some scenarios, we would love to have you involved. That'd be great.
A
So
now
I
will
briefly
talk
about.
What's
in
the
works
for
1.10,
so
1.10
is
a
1.9
goes
out
the
door
first
week
of
september,
and
then
we
start
more
time
immediately
after
I
think
that's.
This
is
slated
for
right
before
it
could
become
san,
diego,
so
on
november,
15th
to
november
16th.
A: We have a couple of features here that I want to go over that I think are pretty cool. The first one is the pluggable scanner. I think we talked about this at either the last meeting or the one before that: supporting additional scanners like Aqua and Anchore in addition to the default Clair. We want to give users the ability to use the scanner of their choice, because all the scanners right now have different value-add propositions.
A
They
have
you
know
scanning
out
a
certain
image,
like
all
the
different
scanners
return
different
vulnerabilities-
and
you
know
they
have
different
policy
engines
on
top.
So
we
don't
want
to
limit
hardware
users
to
just
being
able
to
use
clear,
so
this
is
going
to
be
released
for
1.10
proxy
cache
is
another
feature
that
we're
working
on
right
now,
which
is
support
deploying
a
proxy
cache
approx,
pull
through
cache
to
cache
images
for
substance,
calls
substance,
poles,
traversing
locally,
as
opposed
to
going
over
a
network
over
and
over
again.
A: The target images and the target registry could also sit in a central private registry, where for whatever reason connectivity is an issue. So currently you either pull over the network every time, or you create a proxy cache that pulls once, caches the image, and then serves all subsequent requests from the cache. The Docker registry, which we're leveraging right now, has a proxy mechanism, but it only supports Docker Hub, so we're looking at some solutions: either making some changes on the upstream or implementing our own HTTP caching mechanism.
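For reference, the upstream registry's pull-through cache is switched on with a `proxy` section in its config file. A minimal sketch of that documented option follows; the credentials are placeholders:

```yaml
# config.yml excerpt for the upstream (docker/distribution) registry.
# With this section present, the registry runs as a pull-through cache.
proxy:
  remoteurl: https://registry-1.docker.io
  username: exampleuser   # optional, placeholder
  password: examplepass   # optional, placeholder
```

As noted above, `remoteurl` is in practice limited to Docker Hub, which is exactly the limitation being discussed here.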
A
I
I
also
have
a
ticket
for
this,
and
I
also
have
a
prd
for
this,
and
I
will
share
those.
I
will
link
them
on
the
ppt
and
so,
if
anyone's
interested.
Also,
just
please
make
your
comments.
A
So
immutable
tags
is
another
feature
that
people
have
been
asking
for
for
a
long
time.
I
think
it
goes
back
to
december
of
2017
and
just
you
know
periodically
on
github
every
so
often
someone
would
add
a
comment.
So
this
is
an
ongoing
request
and
we
finally
have
time
to
get
to
it
for
the
olympian
reviews-
and
this
is
basically
another
image-
life
cycle
management
feature
goes
very
long.
It
goes
along
very
nicely
with
tag
retention
and
quota
management.
A
It's
basically
the
ability
to
configure
certain
projects
with
those
tags
as
immutable
to
prevent
images
from
being
overwritten
right.
So
the
doctor's
implementation
is,
you
know
it
doesn't
enforce
the
image
tag
in
the
image
digest.
Mapping
whenever
you
push
a
new
tag,
basically
points
to
a
different
image,
and
so
we
want
to
prevent
this
kind
of
behavior
for
certain
releases.
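To make the idea concrete, here is a small Python sketch of the kind of rule matching an immutable-tag policy implies. The rule shape, repository and tag glob patterns per project, is an illustration and not Harbor's actual data model:

```python
from fnmatch import fnmatch

# Hypothetical immutability rules for a project: (repo pattern, tag pattern).
RULES = [
    ("myapp/*", "v*"),      # release tags under myapp/* are frozen
    ("base/os", "stable"),  # the 'stable' tag of base/os is frozen
]

def push_allowed(repo: str, tag: str, tag_exists: bool) -> bool:
    """Reject a push that would overwrite an existing tag matched by a rule."""
    if not tag_exists:
        return True  # the first push of a tag is always fine
    return not any(fnmatch(repo, r) and fnmatch(tag, t) for r, t in RULES)

print(push_allowed("myapp/api", "v1.0", tag_exists=True))    # -> False
print(push_allowed("myapp/api", "latest", tag_exists=True))  # -> True
```

The key point, matching the transcript, is that only overwrites are blocked: a tag can be created once, and after that a rule-matched push is refused instead of silently repointing the tag to a new digest.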
A: So this is what the feature aims to solve. And finally, we have group support for OIDC. This allows a Harbor admin to assign roles to an entire group when using OIDC login, so everyone in a group gets assigned to a role, say developer. Essentially, when users log in via an OIDC identity provider, the identity token returned by the provider should contain the groups claim, which has the names of the groups the user is a member of, and the user will inherit the permission set of those already-configured groups for the set projects.
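As an illustration of the flow just described, here is a Python sketch that pulls a groups claim out of a decoded ID token payload and maps it to roles. The claim name, group names, and role mapping are all assumptions made for illustration:

```python
# Hypothetical mapping from OIDC group names to registry project roles.
GROUP_ROLES = {
    "platform-admins": "admin",
    "backend-devs": "developer",
    "qa": "guest",
}

def roles_from_token(claims: dict, groups_claim: str = "groups") -> set:
    """Collect the roles granted by every group named in the token's claim."""
    groups = claims.get(groups_claim, [])
    return {GROUP_ROLES[g] for g in groups if g in GROUP_ROLES}

# A decoded ID token payload, shaped like what an OIDC provider might return.
claims = {"sub": "alice", "groups": ["backend-devs", "qa", "unknown-team"]}
print(sorted(roles_from_token(claims)))  # -> ['developer', 'guest']
```

Groups that no admin has configured (here `unknown-team`) simply grant nothing, which mirrors the "already configured groups" wording above.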
A
So
we
have
an
idea
of
how
to
do
this
feature
pretty
much,
but
we
want
to
authenticate
or
we
want
to
test
this
feature
against
popular
identity
management
management
providers
like
eclock
octa
off
zero.
So
if
anyone
has
an
environment
that
has
one
of
these
identity
management
providers
already
set
up,
please
let
me
know
so.
We
can
test
against.
A: So these are the features for 1.10 that I wanted to go over, some of the big ones. Is there anything else you want to add to this?
D: Nothing for me, Michael. I guess a call to action here for the folks in the community: if there's something here that you guys are super interested in, you can come in and help, like Alex mentioned earlier. Please come; we want to involve more and more members of the community, and we're fairly open.
B: Hey Alex, I've got one question regarding the pluggable scanners. So the plan is to support the Aqua and Anchore image scanners as out-of-tree solutions, and Clair will stay in-tree for now, right?
A
So
for
the
for
the
1.10
release,
yeah,
there's
not
going
to
be
any
behavioral
change
for
clear
users.
Let's
deploy
the
same
one
yeah,
so
I
think
overnight,
two
or
three
releases
it'll
become
mountain
tree
just
like
bonker
and
aqua,
okay,
cool.
Thank
you.
D
Yeah,
so
I
will
add,
and,
and
more
importantly,
sorry
alex
more
importantly,
you
know-
there's
a
lot
of
folks
are
using
clear
today,
right,
basically
every
hardware
user.
So
when
you
upgrade
you
want
to
make
sure
that
there
is
absolutely
zero
behavioral
change,
so
things
will
work
just
like
they
did,
but
we
will
provide
the
option
if
someone
wants
to
add
in
agua
or
anchor
because
they
have
a
commercial
agreement
with
them.
They
want
to
use
their
scanners
for
for
providing
additional
context
into
into
the
image.
Then
they
could
so
it's
an
option.
C
Yes,
sorry,
I
want
to
add
a
comment,
especially
for
the
proxy
cache
feature.
So,
given
that
we
have,
as
many
alex
mentioned,
there's
a
very
short
period
develop
time,
we
want
release
harbor
1.10
and
if
anyone
are
interested
in
this
feature,
we
are.
You
are
very
welcome
to
help
us
contribute
on
this
feature.
This
is
a
complex
feature
and
we
will
we
would
love
to
work
with
you
to
deliver
this.
A
Yeah,
the
proxy
cache,
the
the
current-
I
guess
limitation
is
that
you
know.
If
we
were
to
leverage
docker,
docker
registries
implementation,
it
only
supports
docker,
dot,
io
there's
also.
You
know
there
are
a
couple
tickets
on
the
docker
distribution
github
on
moby
distribution,
github
that
people
have
been
clamoring
for
support
for
additional
private
industries.
A
You
know,
but
the
the
merging
process
is
very
slow
right
for
whatever
it's
taking
a
year
and
there's
still,
I
guess
it.
It
means
a
lot
of
changes
not
just
to
docker
industry
but
to
the
client
engines.
So
it's
not
yeah.
A
I
mean
that's
that's
unfortunate,
but
the
other
option
would
be
to
come
up
with
something
ourselves,
but
you
know
there's
a
lot
of
a
lot
of
work
in
that
in
terms
of
how
interface
with
the
storage,
the
the
caching
mechanism,
so
I
think
I
don't
know-
I
don't
really
have
a
too
strong
of
an
opinion
on
this
right
now,
but
I
hope
it
seems
easier
to
just
leverage
docker
registries,
proxy
cache,
but
yeah
we're
definitely
interested
in
hearing.
A
You
know
what
the
community
thinks
and
so
yeah
I'm
gonna
put
links
up
to
I'm
gonna
put
links
to
the
the
github
tickets,
as
well
as
the
product
requirement
documents
that
we've
been
writing
on
this
ppt
at
the
end
of
the
presentation,
so
anyone
who's
interested
can
feel
free
to
comment.
A
Definitely
the
group
support
for
idc.
You
know,
if
you
have.
You
know
not
just
these
three
key
cloak,
optin
all
zero.
If
you
have
others,
I
think
the
the
idea
is
the
same.
The
way
basically
harper
would
have
to
submit
the
scopes
of
the
groups,
claims
that
it
wants
and
the
identity
provider
would
have
to
surface
these
up
to
the
harbor.
A
There's
there's
pot,
I
mean
there
might
be
some
differences
in
the
implantation
and
how
they
surface
the
the
group's
claims.
So
we
want
to
test
this
approach
against
different
identity
providers
so
yeah.
If
anyone
has
an
environment
or
just
interested
in
discussing
this,
please
let
me
know,
as.
A
Well,
so
the
last
portion
is
a
demo
by
martin
there's,
a
user
in
denmark,
I
believe,
or
outside
of
somewhere
outside
of
copenhagen,
and
he's
going
to
demo
the
harbor
rpm
that
he
built,
which
is
a
so
the
hardware
registry
packaged
in
rpm,
designed
to
be
installed
right
now
on
either
red
headline
x7
or
san
os.
But
I
hear
it's
easily
portable
to
other
linux,
distros
right,
so
I'll
stop
sharing
and
then
I
will
hand
it
over
to
martin
sure.
Can
you
guys
hear
me
yeah?
Let
me
make
you
a
host.
E: Can you guys see the presentation?

A: Yep, yeah.

E: Well, thank you, Alex, for inviting me. As you said, I'm an IT consultant, out of Alberta in Denmark. We do a lot of open source consultancy with basically everything from Red Hat to SUSE Linux to Ubuntu, and all the products around that.
E
But
in
this
I've
done
the
the
rpm
packaging
of
of
harbor,
and
I
wanted
to
just
go
over
a
few
points
of
why
I
did
it
and
how
I
did
it
and
then
I
think,
maybe
on
a
another
meeting,
we
can
do
a
technical
presentation
if
anyone's
interested
in
a
deep
dive
on
what
I
did.
E
So
the
idea
here
in
general
containers
versus
packages
is
that
it's
not
better,
it's
just
different,
so
it
it
comes
down
to
that.
We
have
a
lot
of
customers
and
they
have
different
demands.
Some
of
them
have
highly
complexed
kubernetes
docker
installations,
which
are
completely
ready
to
run
these
infrastructure
kind
of
products.
E
We
also
have
customers
who
have
smaller,
just
basic
docker
systems
which
are
really
not
prepared
for
running
infrastructure
just
yet,
but
you
just
use
for
development.
Also.
We
have
customers
who
are
running
different
kind
of
flat
forms
and
don't
wanna
be
stuck
on
one,
so
they
prefer
having
their
infrastructure
components
based
on
on
virtual
machines.
E
In
the
middle
of
this
we're
doing
a
lot
of
right
now
we're
doing
a
lot
of
open
shift,
but
we're
also
doing
kubernetes
and
dog
and
docker
swarm,
and
we
really
needed
something
that
we
could
use
for
the
for
the
customers
to
present
less
as
a
registry
and
scanning
utility.
E: So we wanted to do something in between, which we could use as a point where they could push images, and from there on we could do all the deployment. So when they pushed, it would be pushed onto our internal registry in, you know, OKD.
E
We
also
needed
some
user
management
and
some
vulnerability
scans.
So
that
was
why
we
looked
into
to
the
harp
project,
because
we
thought
that
it
was
really.
It
was
all
the
stuff
we
needed
and
not
much
more.
So
we
also
had
a
look
into
stuff
like
like,
like
the
artifactory
with
the
x-ray,
but
it
was
just
too
big
of
a
product
to
do
these
simple
operations,
and
this
is
also
what
a
lot
of
our
customers
think.
E
Of
course,
if
you
guys
have
any
any
questions,
just
just
drop
in
so
moving
harbor
to
rpm
they're
kind
of
there,
quite
a
few
multiple
services
in
in
harbor,
and
of
course
these
have
to
be
built
into
separate
spec
files.
So
it's
easier
to
to
keep
it
keep
the
system
modular.
E: Also, I had to move around the data locations, because its structure was not really purposeful for a full-machine installation. It's really simple in the Docker containers, but it's simpler to have it in the normal locations for a full system.
E
From
there
come,
the
tech
came,
the
testing
phase
faced
a
lot
of
issues
when
you
don't
know
all
the
systems
to
the
bottom,
you
have
to
to
learn
it.
So
I
had
a
lot
of
permissions
issue,
especially
with
as
linux,
to
get
it
working
there,
but
I
still
see
that
as
a
linux,
it's
really
important
to
keep
the
the
containment
from
the
different
services.
E
So
you
could
also
expose
these
dog
registry
without
being
afraid
that
it
might
compromise
some
of
the
other
services
also
getting
into
the
docker
registry
how
it
works.
You
guys
talked
about
how
how
difficult
it
is
to
integrate
with
other
services
just
to
understand
how
it's
integrated
into
harbor
took
a
lot
of
time,
and
also,
as
mentioned,
I
had
to
deconstruct
the
config
image
from
harbor
and
figure
out
how
it
actually
ran
and
deployed
and
generated.
E
Certificates,
so
in
the
end
I
ended
up
with
with
some
rpms
and
also
did
a
short
ansible
playbook
that
actually
installs
and
configures
all
of
harper
with
the
rpm
installation.
E
So
if
any
of
you
guys
are
interested,
you
can
just
use
the
the
ansible
playbook
to
to
deploy
it
on
a
virtual
machine,
as
alex
said
right
now
is
sensors
and
well
is
supported,
but
it
will
probably,
I
will
probably
also
include
well
8
and
census
8
when
when
census
8
is
available,
but
I
still
have
some
stuff
to
do.
I
still
know
that
there's
probably
some
issues
hiding
which
I
haven't
figured
I've
found
out.
E: I will include some firewall rules in the playbook later on. And at last, I want to separate the users, so we have a separate user for each service. Right now they're all running under the harbor user, but to get better security and segmentation I would like to split up the users. That is something I will continue to improve in the future.
E
So
that
was
a
quick
introduction
to
to
the
happy
rpm
project
and
I
hope
some
of
you
guys
have
the
time
or
the
interest
to
to
look
into
it.
If
you
have
any
questions,
please
feel
free
to
contact
me.
A
Okay,
yeah,
just
how
would
someone
get
in
touch
with
you
if
they're
interested
in.
E
Well,
you
can
either
in
the
you
gotta
do
an
issue
inside
the
the
github
project
for
the
for
the
ansible
playbook.
That
would
be
the
easiest
part.
I
will
also
write
it
in
there,
but
else
I
can
I
can
put
up
my
mail.
I
can
send
you
my
my
mail
to
you
alex.
Then
you
can
put
up
in
the
in
the
rest
of
me
for
the
for
the
meeting.
Then
people
can
type
there
directly.
A
Okay,
great,
thank
you.
This
is
very
cool.
Why
did
you
pick
harbor
when
you
were?
I
guess,
investigating
a
registry
solution.
E
It's
the
rail
history,
functions
it's
the
user
management
and
it's
the
scanning
stuff
and
of
course
you
need
to
you.
Also
in
some
cases
need
the
replications
and
the
api.
But
a
lot
of
the
other
registries
out.
There
are
highly
complex
and
does
a
lot
of
other
stuff
or
like
it's
it's
like
artifactory,
which
basically
serves
everything,
but
I'm
little
into
the
I'm,
not
sure
that
everything
is
good.
When
you're
talking
about
registries,
I
just
want
something
that
does
what
we
need
really
well
and
that's
why
I
chose
harbor.
E
I
know
a
new
claire
beforehand,
so
it's
really
nice
to
see
that
harper
used
that
internally,
it's
great
to
hear
that
you
guys
are
also
discussing
other
possibilities
in
the
future.
I
hope
you,
you
will
also
maybe
con
consider
some
of
the
scannings
for
for
for
the
internal
codes
such
as
sonar,
cube
or
stuff
like
that.
E
But
it's
it's.
I
think
it's
a
really
great
little
project,
which
it
really
does.
What
it's
supposed
to?
E: I think it's just as easy as the Docker installation. In the Docker installation you basically just have to roll out a new image; here you just basically have to run a yum update, and it automatically updates the rest, the SQL stuff, and all the migrations are implemented. So everything should be done in basically a yum update.
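In other words, an upgrade on such an install would look roughly like this; the package and service names are hypothetical stand-ins for whatever his RPMs actually use:

```shell
# Upgrade the Harbor RPMs in place. Post-install scriptlets are expected
# to run the database migrations (the "harbor" package name is illustrative).
sudo yum update harbor

# Restart the services so the new binaries and migrated schema take effect.
sudo systemctl restart harbor
```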
E: Basically, right now in Denmark and the Nordics, we are still opening up to the whole container platform idea. Not many people actually have a real container platform, so we're actually trying to get a little bit ahead.
A
So
that's
that's
all
I
had
planned
for
today.
Just
you
know
wrapping
up
1.9
at
by
the
end
of
next
week
and
our
end
of
beginning
of
next
week.
You
know
some
of
the
things
that
we're
working
on
for
1.10,
the
gitlab
integration
and
datadog,
and
you
know
we're
looking
at
prometheus
any
of
those
issues.
If
anybody
wants
to
contribute
or
have
any
questions,
please
just
reach
out
to
me.
So
again
I
will
update
the
ppt
with
the
links
the
product
requirement,
documents,
et
cetera.
C
Yeah,
so
just
what
let
the
community
user
know
that,
so
we
have
delivered
1.8.2
patch.
So
if
you
want
to
have
some
bad
faces,
you
can
look
at
the
note
of
1.8.2.
You
can
put
it
to
our
related.
A: Okay, that's it for us.