
A

Okay, so we're on slide six. I think the agenda speaks for itself.

A

Again, we have proposals, we have sponsor requests, and we have a backlog. Today we'll be hearing from TiKV, but, as somebody associated with Cortex, I feel no shame in requesting a second sponsor. I believe that Ken has offered to be a sponsor for the project, having spoken to the team, and I don't know if Bryan Cantrill was able to speak to them, but I think you may be able to act on their behalf; otherwise, tell them to go find someone else. Hey.

B

Alexis, yeah, I intend to do that. Their availability is in a tricky time zone for me; it's their morning right now, so we're trying to find a time to talk. Okay.

A

Thank you, Brian, I appreciate that. Cool. And OpenMetrics and Harbor, okay. So that means that we go straight into the TiKV presentation.

A

Do we have Ed and Kevin available? Alexis, this is Kevin. How are you? I'm very well, thank you. As

C

I was saying on Twitter, you

A

know, open source is a global phenomenon. I saw you talking

D

about that. So indeed, yeah, hopefully we represent a part of that globe with this presentation. Well,

A

Good luck! All right.

D

So I guess I'll share my screen, launch the slides for everyone, and go for it. Yeah, make sure everyone can see my shared screen. All right, hopefully everyone can see this slide. Looks awesome? Awesome, okay, great! So, once again, my name is Kevin. I am from the company PingCAP; I'm their general manager here in North America, and I, along with our co-founder and CTO Ed Huang, will be presenting TiKV to everyone. Thank you again for the opportunity.

D

Thank you for not watching the World Cup and listening to us at this very moment. We're very excited to be presenting this open-source distributed transactional key-value store for everyone to consider for the CNCF. And a quick, here, let me see, all right: a quick agenda for today's presentation.

D

I will go through a history and a community update for TiKV, a fairly detailed walkthrough of the technical and architectural aspects of TiKV, and a use case with Ele.me, which is one of the largest food delivery platforms in China right now, serving more than 260 million users; they're using TiKV right now to serve about 80% of their production traffic.

D

If we have time, I will also do a quick demo to give you a little bit of a feel for how to spin up a TiKV cluster on your laptop, and when we have time I'm happy to take any questions from everyone on the call. So, a quick history about PingCAP: it was founded in April of 2015 by three infrastructure

D

engineers who were working in some of the largest Internet companies in China, like NetEase and JD.com; Ed was, of course, one of them. We set out to build the TiDB platform ("Ti", for your curiosity, just stands for titanium). There are several components to the TiDB platform. One is TiDB itself, which is actually a stateless SQL layer that is MySQL compatible. The focus of today's presentation is TiKV, which is a distributed, transactional key-value storage layer. We also built something called TiSpark recently,

D

which is a Spark plugin that also talks directly to TiKV, to help a lot of our users process more complex analytical queries. Last but not least, we also have a project called Placement Driver, which is a cluster that serves as the metadata storage layer and communicates with TiKV to do scheduling, auto-balancing, and also timestamp allocation. The TiKV project was open-sourced a little over two years ago,

D

in April 2016. Its current version is 2.0, its license is Apache 2.0, and here is the link for you to check out the repo. In terms of community progress: TiDB as a whole is actually probably one of the most popular, active open-source database projects out there. It has more than thirteen thousand five hundred stars. TiKV itself has more than thirty-three hundred stars, with 70-plus contributors and roughly around three thousand commits right now.

D

We also have the benefit of enduring contributions from a lot of outside institutional contributors from other companies: Samsung; Mobike, which is one of the largest bike-sharing platforms out there; folks like Toutiao, which is one of the largest tech companies in China, with a super popular news aggregator app (the whole company is valued at around twenty billion dollars right now); as well as two public cloud vendors, Tencent Cloud and UCloud. The big pain point that we wanted to address in building TiKV is to have an open-source, distributed storage layer that can unify a lot of the disparate data that is being stored right now in multiple different kinds of database solutions, but in a layer that supports strong consistency, that really supports distributed transactions with ACID compliance, that can be easily scaled horizontally in either direction, and, of course, that has a cloud-native architecture.

D

That was, of course, a lot of what drew people's interest to the Google Spanner project. That is also where we got our original inspiration for TiKV, but unfortunately Spanner isn't open source and isn't so accessible. Our vision for TiKV is to build a building block for other cool, amazing, powerful systems to be built on top of it. So far we have built TiDB and TiSpark ourselves.

D

Toutiao has built their own metadata service on top of their S3 implementation, and Ele.me, which I will go into in more detail, actually built their own Redis proxy on top of TiKV.

D

So now I will do a dive into the technical architecture of TiKV. As I mentioned, TiKV currently lives within the whole TiDB platform among a few other components, but the focus of today's presentation will be this little red part, where all the TiKV clusters are, where all the data is actually stored and persisted, and which communicates with the Placement Driver

D

cluster. Here is a layout of the TiKV architecture. TiKV, the component, uses gRPC to communicate with the Placement Driver, as well as any clients that can be built on top of it. It exposes two kinds of APIs: one is a raw key-value API, the other is a coprocessor API that facilitates pushdown computation. It uses the Raft consensus protocol to provide data replication and high availability, and underneath each TiKV instance (you can essentially imagine each instance as one single machine)

D

we also have a RocksDB instance, where we leverage that community's work as our storage engine for TiKV. Here are some of the technical highlights of TiKV. As I mentioned, it does scheduling and auto-balancing. We also have a multi-Raft implementation, because each TiKV node has several, actually oftentimes many, different Raft groups that are replicated across different TiKV nodes. So each TiKV node has multiple Raft groups, and it has to facilitate the communication between different TiKV nodes.
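To make the raw key-value API concrete, here is a minimal in-memory sketch of the kind of surface it exposes (a toy stand-in with illustrative method names, not the real gRPC interface):

```python
# Toy in-memory stand-in for TiKV's raw key-value API surface.
# Method names are illustrative; the real API is exposed over gRPC.
class RawKV:
    def __init__(self):
        self._data = {}                      # key (bytes) -> value (bytes)

    def raw_put(self, key: bytes, value: bytes) -> None:
        self._data[key] = value

    def raw_get(self, key: bytes):
        return self._data.get(key)

    def raw_delete(self, key: bytes) -> None:
        self._data.pop(key, None)

    def raw_scan(self, start: bytes, end: bytes):
        # Range scans are only meaningful because keys sort by byte order.
        return [(k, self._data[k]) for k in sorted(self._data) if start <= k < end]
```

A coprocessor request, by contrast, ships the computation to the node instead of pulling the raw pairs back.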

D

We also have a dynamic range-based partitioning feature that allows these Raft groups to be split or merged, or their leaders to be automatically transferred, in order to remove and resolve hotspots. The way we implement transactions is through two-phase commit with optimistic locking. And TiKV is written entirely in Rust, which is a relatively new systems-level language that is getting a lot of traction and adoption, and the nice thing about Rust, as many of you may know, is that it does not have GC stop times or a lot of runtime

D

cost. In fact, I think TiKV is one of the largest Rust-in-production projects out there, aside from, of course, Firefox. Here is one example of how SQL can be realized on top of TiKV, using TiDB, which is what we built internally, along with the community. The way it works with TiDB is that TiDB actually has several layers that we built ourselves: a MySQL-compatible layer, a parser, a cost-based optimizer, and a coprocessor, a distributed

D

executor API that talks directly to TiKV nodes. Each of these little colored blocks is basically a Raft replica, and replicas in the same group are evenly distributed across multiple TiKV nodes and replicated for high availability. The way we map a relational table to key-value pairs is, essentially, that we have an encoding system that maps the keys and values, actually the IDs and the indexes of each row of data, into key-value storage pairs. These can essentially be imagined, or visualized, as one giant sorted map that is broken down into multiple smaller chunks, which are replicated using Raft. All the keys here are sorted according to their byte order, and we did that intentionally in order to support useful operations like scan. This is a pretty important difference comparing TiKV to, say, some other similar projects; YugaByte, for example, which I believe uses hashing to generate these keys, and therefore cannot support things like scan. And sorry about the mix-up with the graphic.
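As a rough illustration of that encoding idea (a sketch; `row_key` is a made-up helper, and TiDB's real codec differs in detail), fixed-width big-endian integers make byte order agree with numeric order, so a range scan over the sorted map returns consecutive rows:

```python
import struct

def row_key(table_id: int, row_id: int) -> bytes:
    # "t<table>_r<row>" with big-endian 64-bit ints: byte order == numeric order.
    return b"t" + struct.pack(">q", table_id) + b"_r" + struct.pack(">q", row_id)

# A plain dict standing in for the giant sorted map of key-value pairs.
kv = {row_key(1, rid): ("row-%d" % rid).encode() for rid in (3, 1, 2)}

def scan(start: bytes, end: bytes):
    # Iterate keys in byte order, exactly as a sorted KV store would.
    return [kv[k] for k in sorted(kv) if start <= k < end]
```

A hash-based key scheme would scatter adjacent row IDs across the key space, which is why it cannot support this kind of scan.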

D

But here is a visual representation of how the coprocessor works. What essentially happens is that when TiDB receives a SQL query, it goes through the parser to break the query down into physical plans, and the partial aspects of the plan are pushed down into multiple TiKV nodes simultaneously, where all the computation is actually done inside the TiKV nodes, at the same time, to compute partial results for a particular query. These partial results are then returned back to TiDB, and TiDB

D

does the final reassembling of all the partial results, which can then be sent back to the client. This is an implementation that we worked on a lot, to be able to take advantage of the distributed nature of TiKV and all the computing power that it has access to in a query, to speed up more and more queries. In one of our future roadmap plans, we actually plan to support more built-in functions that we can push down into TiKV nodes as well.
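The pushdown-and-reassemble flow can be sketched like this (a hedged toy: each "node" is just a Python list, the partial aggregate runs locally, and a final step merges the partials into an AVG):

```python
def partial_avg(local_rows):
    # Runs "inside" a TiKV node, next to the data: return (sum, count).
    return (sum(local_rows), len(local_rows))

def reassemble_avg(partials):
    # TiDB's final step: merge the partial (sum, count) pairs into one AVG.
    total = sum(s for s, _ in partials)
    count = sum(c for _, c in partials)
    return total / count

nodes = [[1, 2, 3], [4, 5], [6]]        # rows spread across three nodes
result = reassemble_avg([partial_avg(rows) for rows in nodes])
```

Only the small (sum, count) pairs cross the network; the row data stays on the nodes.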

D

Here's another example of how TiKV is being used. I alluded to this a little earlier: how Toutiao uses TiKV. They have their own S3 implementation, a bunch of S3 buckets with a lot of blob storage, and they are also using TiKV as their metadata storage right now for their production workload.

D

And here is the latest benchmark, the YCSB benchmark, that we did just last month. Here is the environment and the hardware that we used to run it, and you can see the insert QPS results as well as the read QPS results here. One thing to note is that this is a standard, default three-TiKV-node deployment, and, of course, in an actual production environment most of our users deploy way more than three TiKV nodes to store more data and increase their capacity.

D

So, you know, this result will be much better, and I think the throughput will be much higher, in a production environment, but this is the benchmark that we did last month for TiKV. Oops. Here is a quick overview comparison between TiKV and some of the other popular NoSQL databases out there. Of course, every single database tries to solve different problems in different ways using different technology.

D

So not everything can be compared in a completely apples-to-apples sort of way, but TiKV's original, and still current, goal is first and foremost to support distributed transactions with strong consistency. That is the first-level priority that TiKV looks to support, which is different from some of the other NoSQL databases out there. Here is a visual overview of one of the features that I mentioned before, which is dynamic splitting and merging.

D

As many of you know, Raft regions can get quite big, and as a Raft region gets big it can form hotspots and produce performance issues. One thing that TiKV can do on its own is that if a region, say here region A, gets too big... and by "big",

D

currently we default that to mean 96 megabytes (of course, that can be configured depending on your usage), but if it gets larger than 96 megabytes, we will automatically split that region into two regions and put them on different TiKV nodes. And the reverse is true as well.
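A toy version of that size-driven policy, with regions reduced to (name, size) pairs, the split threshold hard-coded to the 96 MB default mentioned above, and a small-region merge as the reverse operation (details simplified):

```python
SPLIT_MB, MERGE_MB = 96, 10

def rebalance(regions):
    """regions: list of (name, size_mb) pairs ordered by key range."""
    out = []
    for name, size in regions:
        if size > SPLIT_MB:
            # Too big: split into two halves (PD would place them on
            # different TiKV nodes).
            out.append((name + "-a", size // 2))
            out.append((name + "-b", size - size // 2))
        elif size < MERGE_MB and out:
            # Too small: merge into the closest adjacent region.
            prev_name, prev_size = out.pop()
            out.append((prev_name + "+" + name, prev_size + size))
        else:
            out.append((name, size))
    return out
```

In the real system the Placement Driver decides where the resulting regions live; here the list just grows and shrinks.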

D

If a region is too small, currently defined as 10 megabytes, then we will look for the closest adjacent region and merge the two together into one larger region to, you know, improve the performance of that region. And here is another visual illustration of a core feature, which is automatic hotspot removal. One of the best use cases for TiKV, and TiDB as a whole, is if your access pattern doesn't have hotspots or wants to avoid hotspots.

D

TiKV is a great solution for that, and the way that's done is: if, for several regions, the leader, here denoted in blue, is on one particular machine, then all the workload is going to all the leaders,

D

while the followers are not, you know, doing a whole lot. If this is starting to form a hotspot, then the system will facilitate an automatic Raft leader transfer, where we do a logical leader transfer, here in region B, to move the leader from the first machine to the second machine. There's no actual data movement here; it's just a transfer of leadership within the Raft consensus protocol.
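That balancing step can be sketched as pure bookkeeping (a toy: regions are dicts naming their replica machines; "transferring" a leader just flips a field, and no data moves):

```python
def leader_counts(regions):
    # How many Raft leaders each machine currently hosts.
    counts = {}
    for r in regions:
        counts[r["leader"]] = counts.get(r["leader"], 0) + 1
    return counts

def transfer_one_leader(regions):
    # Find the hottest machine and hand one of its leaderships
    # to a follower replica on another machine.
    counts = leader_counts(regions)
    hot = max(counts, key=counts.get)
    for r in regions:
        if r["leader"] == hot:
            r["leader"] = next(m for m in r["replicas"] if m != hot)
            return
```

One call moves a single leadership; the real scheduler repeats this until the load evens out.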

D

Then we will have the workload spread onto two different machines, and thus the hotspot is removed. To go a little bit over our cloud-native integration and progress: like I mentioned, we've always imagined and built TiKV to have a cloud-native architecture that works closely with Kubernetes, so it can be integrated in all kinds of cloud deployment scenarios.

D

Currently TiKV is integrated with Tencent Cloud and UCloud, and most recently we also got onto JD.com's cloud provider, or cloud solution, and, of course, in the future we look to integrate with all the major cloud vendors all over the world. As far as cloud-native synergy with other components is concerned: currently we have a Docker Compose deployment for testing and development on a local machine, which will be part of my demo.

D

We have a tool called TiDB Operator that works closely with Kubernetes to help deploy TiDB in all different public or private cloud scenarios, and we will actually soon be open-sourcing this TiDB Operator tool as well. We of course use Prometheus and gRPC in our standard deployment.

D

Our team is actually one of the largest maintainers of the Rust implementations of both Prometheus and gRPC, and we also use a lot of etcd and are active contributors to etcd, because we have really been leveraging etcd since day one, when we started building TiKV: it had a very mature Raft implementation and also a very rigorous testing regimen that we really leveraged. We didn't fork it outright, because we wrote TiKV in Rust, so we kind of have our own Rust implementation of etcd.

D

We are active contributors to the etcd community: we do a lot of bug fixes, and we are also leading the charge in forming new features like the Raft learner. A quick overview of TiKV's usage: currently there are about 200 companies, give or take, using TiKV in production right now.

D

A lot of them are using TiKV in combination with other components like TiDB and TiSpark, but quite a few companies are using TiKV by itself. One of those companies that I want to talk about is Ele.me, which, like I mentioned, is a food delivery platform with 260 million users. So it's bigger than a lot of the perhaps more well-known food delivery platforms that we hear of here in North America and Europe, all combined. It was recently acquired by Alibaba at a $9.5 billion valuation. The problem, or the pain point, that they were facing

D

is that they had a lot of data in key-value formats, and they were using a hodgepodge of different solutions like Cassandra and Redis. They were looking for a solution that could really unify all these different data sources into one, and they found TiKV and deployed it as this unifying storage layer, which currently is serving and affecting about 80% of the entire platform's traffic. TiKV is currently holding more than 25 terabytes of data spread across four different data

D

centers for Ele.me. And what's really interesting is that Ele.me built their own Redis layer on top of TiKV, because they wanted to continue using Redis; a lot of their application developers love using Redis. So that's what they did to make titanium work for them. If you're interested in digging deeper into how they use TiKV, we recently published a use case story, written by one of Ele.me's engineers, that you can look at via this link.

D

All right, so I've done a lot of talking, and if we have time, I will do a quick demo of how to spin up a TiDB cluster on your laptop. The context of this demo is to show you, number one, how easy it is to deploy TiDB. I have already downloaded, or git cloned, the TiDB cluster repo on my laptop, so I'll be pulling it up right now using Docker Compose, and, as you can see, Prometheus is installed by default, and so are three standard

D

TiKV nodes right here. What I will do now is spin up a MySQL client as well as a Spark cluster, so that you can see how TiKV can be the underlying storage layer that facilitates both components talking to each other and reading from the same data source. But before I do that, I want to show you, real quick, the monitoring mechanisms. Each of these deployments has a Grafana implementation, defaulted to port 3000, and if you log in using just admin/admin (again, this is just for testing and development purposes)

D

you can monitor your entire cluster's metrics and current status. If you go into the TiDB cluster's TiKV section, you can look at the store size, the available size, and things like that. So there's a bunch of stuff you can play with inside the Grafana implementation. And one more tool, which we built in-house, is something called TiDB Vision. This is defaulted to port 8010, and here you have a cool little data visualization tool that is a ring, where each partial ring is basically one TiKV node.

D

If you look a little bit deeper in there, you see a bunch of empty blocks; these are just empty storage spaces. The dark green are Raft leaders and the dark gray are Raft followers, and you can essentially visualize Raft as it goes through the entire TiDB, or TiKV, deployment. So this is how that works. Now, back to the terminal and the demo. What I will do is launch a MySQL instance.

D

So, in the interest of time, I will just do a lot of copying and pasting of commands. So, launch MySQL, and, as you can see, this is TiDB, compatible with MySQL. I will also launch a Spark instance; this will take a little while, so we'll let it run. Let's go back to MySQL, and I'll show you what is in here.

D

So we have a few databases, and we'll actually use this one called tpch_001 for the demo; it just has a bunch of sample data in there. So, tpch_001, and let's see what's in this database. It just has a bunch of different tables; one of them is nation, another is orders, things like that. So let's see what is in nation. All right: just a list of countries with some random information in here. And right now we have our Spark plugin ready, so I am going to input

D

a couple of commands to launch TiSpark, which, like I mentioned, is a Spark plugin that works directly on top of TiKV as well. So these are the two standard TiSpark commands, and with the last one we point this instance at the same database, tpch_001, so they should be talking to the same data source. Let's just see if that is the case; we'll use Spark SQL: select * from nation.

D

We have the exact same table of country information as the one we saw on the MySQL side. So let's make some edits to this table. Since Belgium had such an epic World Cup match yesterday, where it advanced to the next round, we should probably add Belgium to this list of countries.

D

So let's insert Belgium into this table, and you see that we have Belgium at the bottom right here, a new member of this country list. If we run the same command, you immediately see the change being made and visible on the TiSpark side as well. So you can easily imagine multiple updates and changes being made on the MySQL side, while the TiSpark side can immediately do queries and analytical processing on the Spark side, all being supported and stored inside TiKV.

D

All right. So now I'm gonna go back to my slides.

D

And, of course, the goal of our presentation is that we would love to have the CNCF accept TiKV as either an incubation- or sandbox-level project. With this acceptance, and by being part of the CNCF, we're looking to build not just a bigger community, but also something that is more vendor-neutral, that can help us build this project with better governance, with better structure, and, ultimately, more contribution, to build very useful and important components

D

that really are, you know, beyond the strength of the current community right now. We would love to see more language support; right now we only have a Go client for TiKV and a Java client for TiSpark. One of our community members has already started building an open-source Redis proxy; he called it Tidis, so you can check out his repo here, but of course it's still very much a work in progress.

D

We want to support column family structure as well, so there are a lot of things we would love TiKV to have, and with CNCF support I'm sure we'll be able to accomplish that. So, again, thank you for your time. We'd love for you to be our TOC sponsor, and, of course, reach out to me and Ed anytime if you have any questions. We are actually preparing the technical proposal right now, and we will share it with everyone, hopefully within the next week. So that's about it for my presentation.

E

I had one quick question, Kevin. On the PR, it has sandbox slash incubation; which one was it going for, incubation or sandbox? Oh, that just depends on the sponsor, I guess, or, yeah.

D

What do you.

E

feel like is blocking you from one or the other, I guess?

D

I mean, I think I'm sort of leaving this up to the TOC to tell us what you think would be the best level of entry for the current status of the project. You know, given the amount of adoption that we've seen so far for TiKV, I think it probably would work for incubation. But then again, I'm not too familiar with the different criteria and what goes into these considerations, so we're being open and receptive to your opinions about which level is most appropriate.

B

Hey Kevin, this is Bryan Cantrill. One question: have you done a Jepsen run on TiDB or TiKV? Just googling around, it looks like you've done at least some experimentation, but it doesn't look like Kyle's had the opportunity to do a full run. Yes,

D

we've done our own Jepsen test, but we haven't done it with Kyle going through his process specifically. Yeah. You

B

Might.

D

Is.

B

That something that you.

D

that you're looking to do? Um, that's definitely something that we're looking to do, and I guess we just haven't quite gotten around to it. It definitely, as you said, requires some resources from our side as well, but we would love to, you know, go through that process. Our own process probably gets us somewhere along the way, but having him do it with us, we're definitely open to doing that, yeah.

B

I mean, I would really encourage you to do that. I know it's time-consuming and it's expensive, both in terms of resources and potentially monetarily, but I do think it's really worth doing, because, as you know, Jepsen has become the gold standard for actually allowing people to understand what the true consistency guarantees of these distributed systems are.

B

But this is, honestly, very impressive, and it looks like you've got a lot of production use, so I think it's, I mean, a really exciting project. I'm actually a bit embarrassed that I'd not heard of titanium. Our

D

fault, but we're working on fixing that right now. Well,

B

and I think getting a Jepsen run is a good example of something that would get you more visibility in terms of where it can go, because this is certainly, for us and for a lot of people,

B

a really interesting, I mean, we've got all the same problems that you're seeing, and the GC pauses are just a deal breaker for me. And, obviously maybe not in this forum but as an aside, I'm very curious about your experiences with Rust. We're having a lot of really good experiences with Rust, and it's looking like a very interesting trajectory, so I'd love to get your take on that as well. But I would be happy to help facilitate you however I can, as a

D

sponsor or what have you. That would be awesome! Thank you so much, Brian, and yeah, we'll definitely put that higher on our radar. Yeah, that's great.

B

Really impressive presentation. Thank you. Thank you.

D

Any other questions that we can help with at the moment? So,

F

how tightly coupled are TiKV and TiDB? You're proposing to donate just TiKV? That's correct. That's

D

correct. And as far as being tightly coupled is concerned: TiDB itself is just a stateless SQL layer, right, and TiKV is the layer where all the data is stored and persisted. We've always designed it that way, so the two parts can be separated and then used in different ways.

D

So, right now, of course, given that TiKV hasn't been used so much on its own, the current usage is very much connected to TiDB and, you know, TiSpark, depending on what the user is after; but it can be easily adapted, like what the Ele.me folks are doing with the Redis proxy, to be used as a building block wherever you see fit, right on top of it. So what

F

you see is an ecosystem of kind of domain-specific, use-case-specific storage layers on top of TiKV, yeah?

D

Yeah.

F

Like the Redis and SQL ones, and potentially others in the future? Absolutely,

D

Yeah, hey.

B

Chris, my apologies, I've got to drop.

C

No worries. Appreciate you, Brian.

A

I think there was a last question. Can you repeat it, please, Mr. Brian Grant?

F

Yeah, I was asking about whether they see usage as having an ecosystem of domain-specific storage layers, or adapters, built on top, like Redis, SQL, and so on, as opposed to direct use of the KV store by applications. I guess

B

Maybe.

F

we might see both, but it looks like right now it's more layers on top, like Spark and SQL and so on. Well, you could argue about whether Spark is a layer or an application itself. But, okay.

A

So, I had

F

a question about the region breakup as well. Yeah, when splitting a region, does that force transactions to go to a two-phase commit? Is that what happens; is that correct?

D

In terms of splitting the region: the one I was talking about was more about splitting up a large region for performance reasons. So

F

what things, I guess, would be put together in the same region?

D

In terms of... so, that would just be data that's coming in, or encoded from the relational side into the KV store, and then broken up into different regions, mostly by size. Kind of like in this situation, where you can see different tables going into key-value pairs and then broken down into different regions; and these regions, if they get too big, could be split up, just for performance purposes or hotspot-removal reasons. Okay,

D

yeah. I'm sorry, maybe I can follow up with you: specifically, are you talking about a transaction getting disrupted by a region split? I

F

was just trying to understand what some of the automation was doing in terms of how regions are structured, what's in the same Raft group versus different Raft groups, right,

A

And.

F

and what some of the performance trade-offs were with two-phase commit versus the same Raft group. Gotcha.

D

Gotcha, yeah. I mean, in terms of region splitting, I think the main consideration is the size of the region, but also the amount of traffic that the region gets. Assuming a region holds the leader replica for a particular Raft group, it could get split up onto a different machine to remove a hotspot. That, I think, would be one scenario where that will get automatically facilitated.
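Since two-phase commit with optimistic locking has come up a couple of times, here is a heavily simplified, Percolator-flavored sketch of the idea (assumptions: one in-process store, a toy timestamp counter standing in for the Placement Driver's timestamp allocation, and no primary-key ordering or crash recovery):

```python
import itertools

_ts = itertools.count(1)     # toy timestamp oracle (PD allocates these in TiKV)

class Store:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_ts, value), append-only
        self.locks = {}      # key -> start_ts of the transaction holding it

class Txn:
    """Optimistic transaction: buffer writes, lock only at commit time."""
    def __init__(self, store):
        self.store = store
        self.start_ts = next(_ts)
        self.writes = {}

    def get(self, key):
        # Snapshot read: newest version committed at or before start_ts.
        for ts, val in reversed(self.store.versions.get(key, [])):
            if ts <= self.start_ts:
                return val
        return None

    def put(self, key, value):
        self.writes[key] = value          # no lock taken yet (optimistic)

    def commit(self):
        s = self.store
        # Phase 1 (prewrite): lock every key, aborting on any write conflict.
        for key in self.writes:
            conflict = key in s.locks or any(
                ts > self.start_ts for ts, _ in s.versions.get(key, []))
            if conflict:
                for k in [k for k, o in s.locks.items() if o == self.start_ts]:
                    del s.locks[k]        # roll back locks we already took
                return False
            s.locks[key] = self.start_ts
        # Phase 2 (commit): write all values at one commit_ts, release locks.
        commit_ts = next(_ts)
        for key, value in self.writes.items():
            s.versions.setdefault(key, []).append((commit_ts, value))
            del s.locks[key]
        return True
```

Optimistic locking pays off when conflicts are rare: locks are held only during the short commit window, not while the transaction does its work.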

G

Okay,.

A

Two questions, yeah. One is, I think, pretty easy, which is: could you just clarify for everybody exactly what the relationship is with etcd, past, present, and future, just to help everybody understand? And two is, really just for Q&A: I'm interested in, you know, if people have issues with TiKV, what is the number one thing they complain about?

D

First question first. So, for etcd: like I mentioned, we have been leveraging etcd since day one because of its Raft implementation, and also, you know, we really leveraged its testing rigor, using it to test our own TiKV system. We actually use etcd embedded in our Placement Driver implementation; the Placement Driver cluster uses it directly. But for TiKV itself, because we wrote it in Rust,

D

we didn't fork etcd outright, but kind of made our own Rust implementation of etcd in TiKV. So that is the past and the present; for the future, we are very involved in the etcd roadmap going forward. One of the things I mentioned is the Raft learner feature, which we are really looking to implement for the next evolution of TiKV, because one of the, I guess, drawbacks, and this actually goes into

D

your second question, is that because TiKV is, by nature, a key-value store, it doesn't quite support complex analytical queries with the speed and performance that, say, HBase potentially would, or any other column-family database would. That is sort of the inherent structural limitation that TiKV has in its implementation. So there is a limit, for example, to how fast our TiSpark implementation can really go

B

If.

D

it sits on top of what TiKV is now, and you can see that as being one of the, not so much complaints per se, but at least considerations or limitations, when people use TiKV for analytical processing. But with the Raft learner feature becoming more mature and being implemented, we see that as a really good solution to support faster analytical queries on top of TiKV.

D

That's the presentation. Thank you very much. Thank you, yeah. Once again, please reach out to us if you have any follow-ups, and we will circulate our proposal very soon.

C

Kevin, just a quick question: what percentage of the maintainers are outside of PingCAP right now? Is it just primarily PingCAP-driven, or do you have maintainers from other companies? Right

D

now, I think in terms of maintainers it's probably mostly PingCAP. In terms of contributors, you know, our own TiKV team isn't that big, it's probably fewer than 15 people, and we welcome outside contributors. So, okay.

C

Cool, thanks.

A

Just one request: if you do go the incubation route, probably start sounding out potential interviewees in your production user base, because I think the more we hear from them, the better it is for streamlining the due-diligence process. Thank you. Okay, I have got to jump; Chris, could you shepherd the rest of the call, please? I'm really sorry. Yeah.

C

No worries, we're pretty much closing out. Any other questions for Kevin before he disappears?

C

All right, thank you. Cool, thank you, everyone. So, not too many updates. Um, you know, just go to slide 37; it's a pointer to the working groups. 38 is the project review. 39 is just a reminder that we have a few events upcoming this year; at least we have Shanghai and Seattle. If you are interested in submitting a talk to Shanghai, the CFP closes at the end of this week; I think it's on the 7th.

C

So please get your talks in; you have a little bit more time for Seattle. Other than that, slide 40: at our next meeting, on July 17th, we will be hearing from the Falco project, from Sysdig. And that's about it. Any other questions before we close it out?

G

It closes on the 6th, so

C

This.

G

is just the last big shout-out: please consider submitting a talk, or multiple talks, for KubeCon Shanghai, which is November 14th and 15th. This week is your last chance to do it. You can write it out as you're looking at the fireworks tomorrow, if you're in the US, and please encourage the folks in your organization to submit as well. Thanks.

C

Yeah, cool. I linked to the CFP in the chat. Any other questions from folks?

C

All right, thanks all, and we'll see you in a couple of weeks. Take care.

E

You too, thanks.

From YouTube: CNCF TOC Meeting - 2018-07-03
