From YouTube: Community Standup: 8/13/19
A: I'm not sure what it captured, but I stopped the recording and now I'm restarting it, so it should be going — I'll repeat what I just said. Basically, we did a governance docs review last week. Lisa had put the initial drafts up on GitHub for review; those are still there and comments are welcome. What we'll do in this standup today, initially, is give a quick update on those. There have also been a lot of more technical updates as well, not only to the runtime but also to all the underlying software and the curriculum. So I'll cover some of that, and then we'll open it up to the floor for anybody else to raise issues as well.
B: Long story short on the governance: you guys were on the call last week, but I did also ping the rest of the community sites about commenting. There was one other comment catching the typo on metrics, but otherwise there hasn't been any further commentary, which I assume means silence equals consent. But as we discussed, we will wait until the end of October — I'm sorry, the end of August — for people to get back from vacation before we vote to finalize anything.
B: In the meantime, I have been writing up a draft sponsorship agreement, which I will probably be sharing with potential sponsors, along with our legal, perhaps in parallel. Before legal has anything official to say about it, it would be more of an advisory discussion with potential sponsors, just to make sure that what I'm proposing makes sense for everybody.
A: So probably the best place at the moment to get a summary of the changes is here. A lot of these changes, by the way, won't be a surprise to those on the call, because we've kind of talked about these a little bit, but this is the high-level summary. I'll be doing more blog posts that dive deep into the specifics, but this is the thing that ties them all together. There are three things that have changed in this particular release.
A: Really, it's like three different releases that were all timed to go out at the same time — let's just say three distinct efforts that took place, all came to a head last week, and are live this week. The first one is the NRE Labs curriculum itself. As you guys know, the curriculum is its own entity; we maintain the NRE Labs curriculum separately from the antidote platform, and that makes sense for a lot of reasons.
A: That's general separation of concerns, but it also allows us to keep antidote kind of pure — it just makes sense. So what we're doing now is doubling down on that by separating the curriculum out into its own release cycle. Even though it was a separate entity, it was treated very much like an antidote project, meaning that when syringe was versioned to 0.4.0 or 3.2 or whatever, the curriculum was versioned identically.
A: Basically, the curriculum had to wait for the new version of the platform to come out — for that release cycle to finish — or vice versa. It was one of those situations where we really couldn't easily release new content to the curriculum without trying to release everything all at once, and the platform usually wasn't ready at the time the curriculum was. So we're separating those things out, and the curriculum will be developed on its own.
A: It's on its own release cycle, its own versioning schema — all of that is totally separate. And we have a lot of work to do in terms of planning how that actually gets done, because of course, as you go — let's say for the next release cycle the curriculum is going to be version 1.1 or whatever — that plan will have to target a specific version of the antidote platform.
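As a rough sketch of what "targeting a specific platform version" could mean in practice — purely hypothetical, since the planning details weren't settled at this point; the field names and schema here are assumptions — a curriculum release might carry a compatibility field that gets checked before deploy:

```python
# Hypothetical compatibility check between an independently versioned
# curriculum and the antidote platform. Names and schema are assumptions,
# not the project's actual format.
curriculum_meta = {
    "version": "1.1",
    "targets_platform": "0.4",  # platform minor series this content was written for
}

def platform_series(version: str) -> tuple:
    """'0.4.0' -> (0, 4): compare on the major.minor series only."""
    major, minor = version.split(".")[:2]
    return int(major), int(minor)

def compatible(meta: dict, platform_version: str) -> bool:
    """True when the curriculum targets the running platform's series."""
    return platform_series(meta["targets_platform"]) == platform_series(platform_version)

print(compatible(curriculum_meta, "0.4.0"))  # prints True
```

The point of a check like this is that the two release trains stay independent, but a given curriculum plan still pins the platform series it was written against.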
A: Now that those two things are separate, we don't know exactly what we'll do from the planning perspective yet, but this is the very first big step in doing that: we have a totally separate release workflow just for the curriculum, and there's a bunch of other technical things we did in the curriculum that make it more stable. And that's not to mention the new content that's in this version of the curriculum.
A: Obviously, that's something useful too. This wasn't just a release change — we also introduced a lot of new content, and one of those things, at the request of the fine Packet Pushers folks, is this Introduction to Bash, and you can see that it's live now. This is the lesson Derek wrote — three stages. I think he has the intention of adding two more, but those will go into the next release cycle; for the very first version, this is the Bash lesson.
A: Like I said, I'm planning on writing blog posts that dive into each of these specifically. But if you want to just cheat and take a peek, you can always look at the changelog in any of the repos — the important ones, anyway; they have a changelog where you can see what's new. You can also just go to the releases page, by the way.
A: That's another thing you can do — it's the same exact content; in our release workflow I actually just copy that text into the release. You can just go there and see what's new. So that's what's different with the curriculum: basically a lot of new content, but also some changes to the actual release workflow that puts new content out there.
A: So that's one thing. The second thing is the antidote platform itself. As I mentioned, we used to release that in conjunction with the curriculum, but now it's separate, and the antidote platform version 0.4.0 has been released. Of the changes, this is probably the most significant — it's hard for me to say, because I've been working on this release for a few months, and some of the things I worked on were a while ago, so it's hard to remember just how much has changed in this release.
A: Basically, you used to have to make changes in multiple projects, so it really mattered. Anyway, when you look at 0.4.0, the two big things are this feature here — the redesigned endpoint abstraction — and, where is it... there we go, the collections feature. Those are the two big things. There are obviously other features and some bug fixes in here, but those are the two really big ones.
A: I'll just talk about these very briefly. If you've been in the project for a little bit, you might remember the old format in the lesson definition files, where you had a key for devices, another one for utilities, and another one for black boxes. I'll just admit it: that was really bad. It was a bad abstraction, and it was just really tough to reason about.
A: Not to mention it didn't account for endpoints that weren't network devices, and frankly it didn't even account for network devices that weren't supported by NAPALM — which covers a lot, but there are some things it doesn't support, in particular Cumulus. So even though our intentions were to support any network device, an architectural choice made at that time inadvertently limited us. So what we did was we just said: OK, we're throwing that model out.
A: What we're doing is redesigning the lesson definitions so that you just define endpoints, generally. There is no "type" of endpoint — they're just endpoints, that's all they are. From syringe's perspective, a Linux container that has a few Python libraries installed is no different from a container that's got a four-gig-RAM VM running in it; it doesn't really matter. And that's true not only for the way those images are represented, but also for how they're configured.
A: This was one of the biggest problems with getting an image like Cumulus into antidote: when you move between stages, you reconfigure all of your endpoints, so you have, say, Junos configs in your lesson directory. Well, Cumulus is not supported by NAPALM, so that entire idea of configuring Cumulus between stages just wasn't going to work.
A: We just didn't have the ability to do that. Not to mention there's a myriad of use cases that require that you be able to make runtime configuration changes to anything. A good example: if you have just a basic Linux container, and you want to teach somebody how to troubleshoot a script, then in between your stages...
A: ...you do something to that script, or maybe change a dependency, or any number of things. You want to be able to make configuration changes to anything in your lesson, not just network devices. So that was the big reason we redesigned this: not only was it a pretty bad abstraction to begin with — so we simplified it — but we also made it so that you can apply configuration changes to any endpoint in your lesson.
A: It doesn't matter what it is. We still support NAPALM, so there's a way to do that. A good way of getting a snapshot of how this works now is by going to the docs: you go to Contributing — I'm sorry, you go to Antidote Platform, Curricula, Lessons, and then Endpoint Configuration, and you'll see we have a number of configuration options. You'll see that NAPALM is still supported.
A: So honestly, if you do have network devices, that's a very useful option to have — no need to reinvent that wheel, so we're just using NAPALM. The way you do that is you specify in your endpoint the configuration type, napalm, and then the NAPALM driver for that device. So for a Junos device you'd say napalm-junos; if it's an EOS device, you'd say napalm-eos. That string is passed directly to NAPALM, so we're not screwing with that.
A: However, you have two new options that didn't exist before, and those are Python and Ansible. Just like with NAPALM selected, syringe will expect that you have a configuration file named after the device — say, if you have a network device vqfx1, it will need to be named vqfx1.txt. Similarly, if you just want to simply write a Python script, then you'd name it vqfx1.py.
A: If you have an Ansible playbook that makes the changes you want to make, you just put it in the directory and syringe will run it on your behalf. And again, the reason we did this was because not everything runs NAPALM — certainly not regular Linux containers, but even some network devices aren't supported by NAPALM. We just wanted a much more robust capability here. So definitely dig into that.
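As a rough illustration of the dispatch described above — a sketch, not syringe's actual code; the function, the extension mapping, and the directory layout are assumptions — the per-endpoint selection of a configuration mechanism might look like this:

```python
from pathlib import Path

# Hypothetical sketch: given an endpoint's configurationType, find the
# stage file that drives its reconfiguration. The extension mapping and
# naming convention here are assumptions for illustration.
CONFIG_EXTENSIONS = {
    "napalm": ".txt",   # device config pushed via the NAPALM driver
    "python": ".py",    # arbitrary Python script run against the endpoint
    "ansible": ".yml",  # Ansible playbook run on the endpoint's behalf
}

def config_file_for(endpoint_name: str, config_type: str, stage_dir: str) -> Path:
    """Return the expected per-stage config file, e.g. vqfx1.txt for NAPALM."""
    # "napalm-junos" and "napalm-eos" both map to the napalm mechanism.
    mechanism = config_type.split("-")[0]
    try:
        ext = CONFIG_EXTENSIONS[mechanism]
    except KeyError:
        raise ValueError(f"unknown configurationType: {config_type}")
    return Path(stage_dir) / f"{endpoint_name}{ext}"

print(config_file_for("vqfx1", "napalm-junos", "stage1/configs").name)  # prints vqfx1.txt
```

The design point from the talk is that the mechanism is per-endpoint, so a NAPALM-driven router, a Python-scripted Linux container, and an Ansible-configured Cumulus box can all coexist in one lesson.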
A: The other thing that we changed in the endpoint abstraction is the way endpoints are presented to the front end. Again, in the old abstraction, if you wanted to show something in the UI, you had to declare it as a device or a utility — you had to put it in the right bucket.
A: Basically, if you didn't want it to show up at all, you had to put it in this weird, arcane, hardly documented bucket called "black box," which was a cool name, but whatever that whole mess was, it was not appropriate. So what we did instead was say: for every endpoint, regardless of what it is, you declare a list of what we call presentations, and those presentations are very simple. They just need three things. First, the name — you just name it
A: something like "cli" — it's sort of a human-readable thing. Then the port we need to actually use to access it, and then the type; we currently have SSH and HTTP supported. SSH is exactly what you've seen thus far: a terminal in the browser, all the way to the endpoint — that's the status quo. However, this was one of the things we wanted to revisit.
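The three fields described above could be sketched like this — shown here as Python data for illustration; the real lesson definitions are YAML and the exact key names are assumptions:

```python
# Hypothetical rendering of the per-endpoint "presentations" list described
# in the talk; real antidote lesson definitions are YAML and may use
# different key names.
endpoint = {
    "name": "stackstorm1",
    "presentations": [
        {"name": "cli", "port": 22, "type": "ssh"},    # terminal in the browser
        {"name": "web", "port": 443, "type": "http"},  # web UI shown in an iframe
    ],
}

# An endpoint that declares no presentations simply isn't shown in the UI,
# replacing the old "black box" bucket.
hidden = {"name": "internal-db", "presentations": []}

for p in endpoint["presentations"]:
    print(f'{endpoint["name"]}: {p["name"]} ({p["type"]} on port {p["port"]})')
```

Note how the two tabs the speaker describes later — a CLI and a web UI on the *same* endpoint — fall out naturally: they're just two entries in one endpoint's list.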
A: One of the reasons we built this feature was that we actually did sort of support the ability to show a web UI if your endpoint was serving one, but in the old abstraction it was horrendous, and I never talked about it because I was ashamed of how it was implemented. So this was another thing I tackled in this change. If you have, say — StackStorm is a great example.
A: StackStorm — I'm actually working on a lesson that takes advantage of this — has both a web UI and a CLI, and it would be useful to show both. So on one tab you have the terminal: a bash shell with the StackStorm CLI installed. That's the status quo. But in addition, you can have another tab to the same endpoint — by the way, it's not a different endpoint, it's the same exact endpoint — that's showing the web UI in an iframe.
A: The only thing you have to do to make that work is declare it as an HTTP presentation. And this is a little bit of a typo — you obviously wouldn't use port 22 for that; you'd use 443 or something like that. So this is pretty cool. I'm excited about it; I feel a lot better about the endpoint abstraction now, and in the lessons that I work on personally I'll be incorporating these features, so that people have sort of reference
A: examples to pull from. The second big feature in the platform is collections, and this one's a lot easier to summarize. Let's just say categorization in NRE Labs is, has been, and will probably continue to be a moving target. We were always thinking of new ways to categorize content, but that's mostly from a technical perspective — like right now, if you remember, we've got fundamentals, tools, and workflows.
A: That's certainly useful — we're not suggesting we're going to throw that out; we just feel like we want to add to it. There's more detail-oriented stuff out there, but from a non-technical perspective there are also some other categorizations that are useful. I'll just give you a little bit of history — putting my Juniper hat on for a second here; I work for Juniper.
A: Obviously the intention is not to just load that collection full of Juniper content; it's just to have a place for the stuff that's already there. And similarly, we have other folks that want to contribute lessons, and we might want to attribute that content to them as well. For instance, the Network to Code guys, and Twin Bridges, which is Kirk Byers' company — they create tools of their own.
A: We want to provide a home for those things, and so we introduced this idea of collections. Collections are pretty simple: first off, it's a way of effectively creating kind of a home page where you can go to learn more about some entity that's involved with NRE Labs. Now, I imagine that sponsors of the project will sort of implicitly have a collection, but that doesn't mean it's a one-to-one relationship.
A: Being in a collection doesn't automatically make you a sponsor. It's just a way of housing a particular piece of content, or at the very least linking to additional resources. That's the big thing for us right now: there's a ton of content out there — Packet Pushers, for instance, has a lot of material around this stuff.
A: If you go to the Network to Code guys, they've got all kinds of content on their site, but of course their strength is in actually holding classes. Same thing with Kirk — they're a little bit more focused on Python. NRE Labs is not intended to replace all of the existing training; it's meant to augment it.
A: So it would be stupid if we didn't find a way to link to those folks, and that's what we've done. Collections are kind of a new categorization tool that leans on the more non-technical side of things. If you own lessons and you want those lessons to be placed in your collection — you'll see that the two lessons we have right now that use Juniper open-source tools, like JSNAPy and PyEZ...
A: ...you put the collection in that field, and the platform will do all the work of populating this page with information. So I'm pretty excited about this. This is a way for us to link to other resources — sort of next steps — and I think it will also help get folks involved. A very good example would be the Network to Code guys or Kirk saying, hey, we've got all these offerings from an automation perspective.
A: We just want to get the word out about what we're doing. And so what they might do, for the tools that they maintain, is give a sneak preview or a sort of 101-level lesson; they'll build that into NRE Labs, and then we have a call to action at the bottom that says, hey, check these guys out, and it'll tie into the collection for follow-up. That's the ideal goal in my mind.
C: [inaudible]

D: [inaudible]
A: Cool — and I'll just put the cap on this: just like everything in NRE Labs, it's all stored as code. In fact, the collections are actually in the repo alongside the curriculum. If you go to the NRE Labs curriculum repo, you can see that collections actually has its own directory. We have a bunch of POC collections in here that we probably should delete — we were playing with it — but they're not live anyway. There's a field in the collection definition, just like in lessons, for tier, so they're all at ptr.
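A collection definition along these lines might look something like the following — a hypothetical rendering as Python data; the real definitions live as files in the curriculum repo and the field names here are assumptions:

```python
# Hypothetical sketch of a collection definition with the tier field
# described in the talk; the actual schema in the curriculum repo may differ.
collection = {
    "id": "twin-bridges",                 # assumed identifier
    "title": "Twin Bridges Technology",   # entity the collection's home page describes
    "website": "https://example.com",     # placeholder link to additional resources
    "tier": "ptr",                        # same tier field as lessons; stays at ptr until promoted
}

def is_live(defn: dict) -> bool:
    """Only prod-tier definitions show up on the production site."""
    return defn.get("tier") == "prod"

print(is_live(collection))  # prints False
```

This matches the behavior described next: the POC collections all sit at ptr, so they're editable in the repo but don't appear in production.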
A: Right now none of them are actually in production except for the one I just showed you. Anyway, what I'm trying to get at is that they're all stored as code, and of course we can change anything, just like we would change a lesson. So if you have anything you think should be added or changed, it's all totally editable, as if it were a lesson.
A: Going back to my blog post — I'll be brief here, because now it's pretty simple to just say we've migrated. We still have a presence in Google Compute; if you've been around the project for a little bit, you know that we've been running on Google Compute for a while.
A: The core speed there was kind of meh — it wasn't very fast. So what we did with Packet was select a server type with almost double the core speed, but a lot less RAM. We actually don't need that much RAM, because we're aggressively garbage-collecting live lessons: when you start a lesson, that starts a 30-minute inactivity countdown. If you refresh the page, that countdown gets restarted — and there are a few other things that restart the countdown — but generally, after 30 minutes of inactivity, your lesson gets cleaned up.
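The garbage-collection behavior described here can be sketched roughly as follows — a toy model for illustration, not syringe's actual implementation:

```python
# Toy model of the 30-minute inactivity countdown described above; the
# platform's real garbage collector differs in detail.
TTL_SECONDS = 30 * 60

class LiveLesson:
    def __init__(self, lesson_id: str, now: float):
        self.lesson_id = lesson_id
        self.last_activity = now  # starting a lesson starts the countdown

    def touch(self, now: float):
        """Any activity (e.g. a page refresh) restarts the countdown."""
        self.last_activity = now

    def expired(self, now: float) -> bool:
        return now - self.last_activity > TTL_SECONDS

def collect(lessons: list, now: float) -> list:
    """Return only the lessons that survive this GC pass."""
    return [l for l in lessons if not l.expired(now)]

t0 = 0.0
a, b = LiveLesson("a", t0), LiveLesson("b", t0)
a.touch(t0 + 25 * 60)                  # user refreshed lesson "a" at 25 minutes
alive = collect([a, b], t0 + 35 * 60)  # at 35 minutes, "b" is cleaned up
print([l.lesson_id for l in alive])    # prints ['a']
```

The design consequence is the one the speaker draws next: because idle lessons are reclaimed continuously, steady-state RAM needs stay low and core speed becomes the binding resource.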
A: So in terms of RAM to run stuff, we don't actually need that much, because we clean stuff up pretty regularly — it's all automated as part of the platform. What we really needed was core speed, so we selected a server type optimized for those design goals, and as a result the per-node price was actually fifteen cents lower than the big VM we were paying for before, which is cool. So we get a better experience, and it's cheaper, and it's bare metal.
A: So it's not a VM, and there's no nested-virtualization penalty. There's always a penalty with that — even if you're using hardware acceleration, which we were, there's still a penalty — and we're avoiding it now. And I'll just show you why I say that: we're gathering telemetry about the way the platform is performing all the time.
A: Of course, we don't gather any personally identifiable information, but we do gather, generally, how the platform is working. Let me zoom out, because I think this is too far forward — yes, hold on, it's loading, pulling all the data. So these are the two graphs that I look at. There are a few graphs I need to add to the new Grafana instance that I stood up; I have yet to finish configuring it.
A: You'll see the lesson load times are a little faster — maybe not quite a fair comparison yet, and of course I need to do some filtering here, because obviously there are lessons in here that don't spin up network devices, and those are going to be fast kind of no matter what. So I've got to do some filtering, but if you just kind of squint a little bit, you can see that the lesson load times are a little faster.
A: More importantly, in my mind, they're more consistent — there's a lot less variability in the load times. So what I'm hoping to do is compile some reports once we've gathered enough data. It's only been a few days, and I want more data than this, but I'm pretty sure we'll be able to see at least some sort of performance improvement.
A: Just visually I can kind of see that it's a little better, but I want to get more data first, and then I also want to clean it up so that there's not so much noise. Load times have been one of the things we've been trying to figure out how to optimize for, and I think this is a really good step in that direction.
C: [inaudible]
A: Yeah — and the same goes, by the way, for the syringe redesign that I've talked about on GitHub; that's another example of that. There's not a big constraint on usage right now, but I'm basically waiting for that Hacker News moment. So yeah, we're trying to do a lot of things while we have the time and the bandwidth to tackle them, and moving to Packet honestly was a lot easier than I thought
A: it would be. Moving away from Google — or any public cloud, really — they want you to use all their services. Packet's way simpler: they don't really even have services; they have servers, and that's about it. They have a few other things on top, but it's nothing like a public cloud.
A: With a public cloud you can have the TCP load balancer and all of that stuff, so migrating to Packet meant making the infrastructure a little bit more heterogeneous — using Cloudflare up front for load balancing, and a few other things like that, instead of everything under one roof. But generally, the reason we're doing a lot of this stuff is, (a), to improve the experience now, and, (b), to effectively pay down the technical debt while we don't have that much interest accrued.
A: And that's it — that's really it. Please feel free to read the rest of the blog post. Like I said, I covered everything that I talked about in the post, but some of the details might be useful to you. The changelogs have all the details of which PRs introduced which features.
A: If you want to look at the code or the comments, all of that stuff is there. And I'll be writing more blog posts — I'm actually doing a little bit of travel over the next three weeks, but I do have some gaps, so hopefully I'll be able to release some blog posts in the next few weeks that cover each of these things in much more detail, with pretty pictures. Thanks.