From YouTube: Tweety - a service for controlling Lotus canary nodes
A: Hey everyone, let me share my screen. So today I'm going to be talking about Tweety. So what is Tweety? It's basically a web service that we built at the end of last year, around December, to control Lotus canary nodes. You can find it on the filecoin-project organization, under the tweety repo. So what is this about?
A: We used Tweety, for example, to compare the splitstore functionality, which is going to be merged and released under a feature flag in the near future, versus the Lotus stable release. So what is Tweety? Basically, it's a web service that very easily installs Lotus nodes on a Kubernetes cluster, always on the same hardware, with the same disks at the moment, so you're pretty much getting the same hardware performance from different nodes.
A: So, for example, you can either start syncing a node from a snapshot on your version of Lotus, or you can try to, let's say, upgrade the release version of Lotus, or any other version, to your particular version. That's made possible by the fact that we're building Docker images of every single Lotus version. They're publicly available at this URL, or you can go to Lotus and check the CircleCI build configuration to get the same data. You can pretty much pull any version of Lotus that has been built over the last one or two months.
This service supports creating a canary, upgrading one, installing from a snapshot, and deleting a canary. That's pretty much what it has at the moment. And for those of you who have played a little bit with Kubernetes infrastructure:
A: This makes it easy for you: you don't have to start, let's say, a VM on Amazon, install the software and all that. With just one command you can start the Lotus node and make sure that your pull request is working. So, how do you use it?
A: You basically need to start Tweety, and then you just issue a JSON-RPC request with the release name for the canary that you want to deploy or upgrade, and with the new image name. At the moment it's quite rough around the edges in terms of ACLs and permissions, so it's kind of trusting the inputs that are coming in with respect to ownership of those canaries. So at the moment, if two people try to use the same canary, they're going to have a bad day. That's something that we need to work on and improve in the future.
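The talk never shows the request body on screen, but the flow just described (a JSON-RPC request carrying the canary's release name and a new image name) can be sketched roughly as below. The method name `Tweety.UpgradeCanary` and the parameter keys are illustrative assumptions, not Tweety's documented API:

```python
import json

def make_rpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request envelope. The envelope shape is
    standard JSON-RPC; the method and parameter names used below are
    guesses for illustration only."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }

# Hypothetical upgrade call: release name of the canary plus the new image.
req = make_rpc_request(
    "Tweety.UpgradeCanary",  # assumed method name
    [{"releaseName": "lotus-4",  # canary to deploy or upgrade
      "image": "example.registry/lotus:some-tag"}],  # new image name
)
print(json.dumps(req))
```

The same envelope would presumably cover create and delete by swapping the method name and parameters.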
Let's do a quick demo. At the moment I have Tweety running in this window, I have the README open in another window, and I'm also monitoring the Kubernetes cluster down at the bottom. We can see that there are two canaries running at the moment, so I'm just going to go ahead and delete one of them, just so that we have space for a new one.
A: So I'm going to delete canary one, and hopefully issuing this RPC request is going to work out. It's just going to trigger a helm command to delete the canary, and yeah, the canary is being terminated at the moment on this screen.
A: It's currently returning an error, because we're still leaving behind volumes; they need to be cleaned up manually, and that's something that needs to be improved in the future. Whoops, that's incorrect, an old version. So, okay, we've removed one of the canaries.
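The delete step just shown (an RPC that shells out to a helm delete, with volumes left behind for manual cleanup) might look roughly like this sketch; the release and volume naming and the cleanup command are assumptions for illustration, not Tweety's actual internals:

```python
def plan_delete_canary(release_name):
    """Build the helm command a delete request would trigger.
    `helm delete` is an alias of `helm uninstall` in Helm 3, and it does not
    remove PersistentVolumeClaims created through StatefulSet volume claim
    templates, which matches the caveat in the talk that volumes are left
    behind and must be cleaned up manually."""
    helm_cmd = ["helm", "delete", release_name]
    # Hypothetical follow-up cleanup an operator would run by hand;
    # the claim name is a guess for illustration.
    cleanup_cmd = ["kubectl", "delete", "pvc", f"{release_name}-datastore"]
    return helm_cmd, cleanup_cmd

helm_cmd, cleanup_cmd = plan_delete_canary("canary-1")
print(" ".join(helm_cmd))
```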
How do we create a new canary? I'm just going to give it a new name, for example lotus-4, and I'm going to release this version from this commit. I'm not sure what this commit is, so I'm just going to release latest master from Lotus, which I believe is this commit; it was merged like two days ago.
A: It basically schedules the application, generates the JWT tokens, assigns a volume for that canary, and then starts it. On Tweety we can see some useful log lines, to basically see who is doing what. We're also printing out the exact helm command, so that folks who are monitoring this can understand what's going on, and also the helm output. And we can see that now the lotus-4 canary is working; we can check the logs for it and see that it's downloading, I guess, the params or the snapshot at the moment.
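The create flow just described (schedule the application, generate the JWT tokens, assign a volume, start the node, and log the exact helm command) can be sketched as below. The chart path, value keys, and volume naming are all illustrative assumptions; only the overall sequence comes from the talk:

```python
import secrets

def plan_create_canary(release_name, image):
    """Sketch the steps described for a create request."""
    token = secrets.token_urlsafe(32)           # stand-in for a real JWT
    volume_claim = f"{release_name}-datastore"  # one volume per canary
    helm_cmd = [
        "helm", "install", release_name,
        "./charts/lotus-canary",                # assumed chart path
        "--set", f"image={image}",
        "--set", f"volumeClaim={volume_claim}",
    ]
    # The service logs the exact helm command so observers can follow along.
    print(" ".join(helm_cmd))
    return {"token": token, "volumeClaim": volume_claim, "helmCommand": helm_cmd}

plan = plan_create_canary("lotus-4", "example.registry/lotus:master")
```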
A: The cluster already has Prometheus and Grafana installed, and based on naming it's able to discover all the metrics, so we can build dashboards and visualize them. We still haven't gotten around to doing that, but it should be fairly straightforward, since we deploy Tweety into an existing Kubernetes infrastructure.
A
We
also
want
to
build
some
feature
flags
so
that
developers
are
able
to
actually
pass
their
feature
flags
to
lotus
and
not
just
like
create
like
vanilla,
lotus
nodes,
because
when
we
release
new
features,
maybe
some
of
them
will
be
experimental,
so
that'll
be
a
good
way
to
test
them
yeah.
So that's pretty much all I have for you today. The repo is located here.
A: If someone finds this useful, talk to me so that we can see how to make it better, and hopefully this will be available to more folks around the organization soon, so that you can easily start Lotus nodes and compare different functionality between the main release branch and experimental features.
B: [question inaudible]

A: Yes, so canary nodes are running Lotus from master while they're not in use, and they're syncing towards mainnet.
B: So, can you make them do anything? Like, is there any kind of scripting or stuff, or do they just run and you just make sure that they don't...
A: The idea is for them to sync mainnet and to execute the chain, and for you to compare whether your new code is performing as well as the old code.

B: Okay. So before we release, let's say, a new version of Lotus, to be able to check it?
A
No
at
the
moment
it's
just
running
vanilla,
lotus
note
I
mean
you
can
connect
to
it
and
you
can
do
extras.
A
Let's
say
you
can
test
the
deal
functionality
with
it,
but
by
default
it's
just
thinking,
magnet
and
yeah
executing
the
chain
and
making
sure
that
it's
still
performant
as
the
release
version,
so
that
when
we
release
a
new
version,
it
doesn't
use
more
resources
or
something
like
that.
C: May I just quickly step in, because I was part of the conception of this project. One of the things that we wanted to create here, or give developers the ability to do, is basically: hey, I have cooked up a new feature, I have cooked up a change on some particular feature.
C: I want to launch a node that is syncing from a snapshot, or that is syncing from an existing store, so it's already up to date, potentially, with the chain, and I want to just put it running against the network, so I can test anything that I want against it.
C: So it's a fully synced-up node that I'm not going to have to run on my machine. And ideally the output of all of this is to have a Grafana dashboard that is constantly showing metrics of these different streams and different version lineages, potentially, of Lotus, so that you can compare things with one another. This was initially built also for testing the splitstore and other alternative block...
A: And I see a question about whether we can actually use IPFS for a Docker registry. We haven't really had the capacity to do that. We just wanted the simplest build system that works for everyone, and the ECR public images just worked out of the box, so that's what we went with. But there's nothing stopping us from building a Docker registry on IPFS and storing the artifacts there.