
A

Good morning, good morning.

B

Morning, morning.

C

All right, Karthik, all ready to go? Yep, all ready. Cool. I think we'll probably get started at about 8:03. Do you want to try sharing your screen, just to get that going?

B

Yeah give me one second.

B

Okay.

D

Yeah.

A

Okay, are you guys able to hear? Yeah.

B

I can see. All right, good, that's good, yeah. Thank you. All right, perfect. Thank you.

A

I'll just leave it like that. Is that good, or do you want me to share it a little later? I'm sure that's

C

fine. Just leave it as this one. Ah, perfect. Thanks.

E

Hey Clint, hey.

C

Good morning.

C

Good morning, everybody. We'll give it just a couple of minutes to get going, to let some more folks join.

C

All right, just one more minute, we'll get kicked off.

C

All right. I see we don't quite have a normal load of people, but I think we should still get going anyway. Good morning, everybody, thanks for joining. We've got a schedule within the Google Drive as usual, and it's packed with a couple of things. We're gonna have a presentation from YugaByte; Karthik is on the line here to talk about a scale-out database that they've been creating, and I think it's pretty cool stuff. And then we also have a slot to talk about the sessions at KubeCon, with a follow-up discussion. Before getting into that:

C

a general note. I think that Camille has been reaching out to some folks on the SWG. Please do spend some time with Camille to give her some feedback on the SWG and what it's doing, and what it should do, or what you think it should do. This is all part of making sure that, as the TOC makes decisions (you know, giving charters to the storage working group), they're informed by the perspectives of the people in the group.

C

So if you could take some time and give Camille some feedback if she's reached out to you, I think we'd all appreciate that. So with that, let me hand it over to Karthik, and we'll get going talking about YugaByte.

A

All right, thanks a lot. Hey guys, I'm Karthik, and I'm going to talk to you about YugaByte. It's a transactional, high-performance database for planet-scale applications, and we'll dive right into what that means in detail. A real quick intro about ourselves: the three of us founders, Kannan, Mikhail, and myself, started the project. I'm one of the founders and the CTO here, and all three of us, along with nine others, worked at Facebook on a variety of different applications in production.

A

We worked on both Cassandra and HBase, putting them into production for use cases such as our messaging inbox, messaging search, time series, spam detection, and so on. And yeah, let's jump right in. So, a real quick thing about the problem we're trying to solve: we saw this pattern repeated quite often at Facebook and, having been in the open-source community with HBase way back,

A

we have seen a lot of companies trying to repeat it in the web 2.0 tech company sector, and now this pattern is becoming even more common in the enterprise, especially with the advent of the public cloud. So how do people build planet-scale apps? It's pretty clear that Docker, with Kubernetes as the orchestration, is the favorite choice for running stateless applications, and that's pretty much going into production and becoming mainstream. But when it comes to data, that's when the challenge begins.

A

Today's way of doing a data architecture is to have a SQL master and slave (whether it's sharded or a single-node scale-up solution, there's a SQL master and slave) plus one or more NoSQL solutions, because there are certainly advantages provided by NoSQL databases that really help. And the minute you have put your data across multiple data stores, it becomes very expensive to recompose the data, so people put the data that they need to serve to the end user into a cache like Redis.

A

So immediately, with this sort of architectural setup, even if it is containerized, the issue becomes: you need to figure out which subset of the data goes into a transactional database, like a relational SQL database; which subsets and which types of access patterns are ideal for which of the NoSQL databases; and which subset of data is being accessed by the user and therefore has to stay in a cache like Redis. And because multi-region is becoming the norm, a lot of applications want to keep their data closer to the user for low-latency access.

A

You need to figure out how to replicate it at pretty much every level. And if there is a failure in this sort of system that's been put together (the blueprint is similar, but the exact implementation varies; maybe the choice of technology varies a little bit here and there), inevitably it takes a long time to figure out what went wrong. So the question we get asked is: suppose you go to a public cloud like AWS.

A

So let's take the AWS example: how does it change this picture? Well, it makes things a little easier for sure, but not a whole lot. You replace the Redis set of machines with ElastiCache, which Amazon or another cloud provider will manage for you; the SQL tier is replaced with something like Aurora or RDS; and the NoSQL tier is replaced with DynamoDB. So effectively the architecture is still pretty much the same. So at YugaByte, we tried to dig into

A

why it is not possible to converge all three. This is based on a lot of work we did at Facebook, along with other work we have done with teams there on projects like TAO. So what really is the characteristic of these databases that makes an app require multiple of them?

A

If we split it into three core requirements, pillars that a database should offer, you can think of it like this. SQL databases, including Aurora, offer you high performance and transactionality, but not planet scale, because it's difficult to get your data distributed and scaled out, adding machines as you want; all of that is manual. NoSQL databases, like MongoDB on the open-source side (or a variety of others; that's just an example), and Azure Cosmos DB, which is a multi-model

A

NoSQL database from Microsoft, both offer high performance and planet scale, but don't offer transactions when you need them. I'm talking about transactions in both the single-row and multi-row sense; some of that is offered, and some of that is not. On the other side, the tack that Google Spanner took was to go after planet-scale and transactional workloads.

A

But it's not ideal for high performance, because you're subject to the atomic clock (effectively the atomic-clock latency) for streaming types of workloads where you don't really need it. So at YugaByte, we're trying to bring all three pieces together. It's got to be high performance, where you can serve with low latency and it can just be the serving tier; it's got to be transactional when you need it, for the subset of workloads that need transactions; and it's got to be planet-scale.

A

So those are our design goals: transactional, high performance, planet-scale, and of course cloud native. Really quickly, on the transactional side: we wanted the core data fabric to have distributed ACID transaction support, both single-row and multi-row, with a document-based storage engine at the core, but one that can be exposed through a variety of different APIs that people are used to. On the performance side, we wanted it to be really low latency, so that ideally, for a majority of workloads,

A

people should not need to deploy a cache in front of this system, and it should be able to accommodate high throughput. We built it with planet scale in mind, so you're able to globally distribute data and offer tunable reads, so that people in remote data centers can read from their nearest data center with some semblance of consistency. And finally, on the cloud-native side:

A

the obvious ones are, of course, being highly scalable and highly resilient: add nodes when you need to expand your storage footprint or need more serving or cache capacity, and tolerate node failures, or most of the common cloud failures, without any intervention. But more importantly, also make it really easy for the user to use this database by expressing an intent, with the database respecting that intent, and give a seamless operator experience for day-2 operations when you're trying to keep this running in production.

A

We're going to look at a few of these things in detail, but at the core of the database, what we did was this: instead of being too purist about the exact languages, we brought in the best features of the two sides of the house. From the SQL side, we bring in strong consistency, secondary indexes, ACID transactions (single-row and multi-row), and the expressiveness of the query language; WHERE clauses and JOINs are something we'll continually work toward and add.

A

That's the core philosophy. From the NoSQL side, we bring in tunable read consistency (read from a follower or one of the async replicas in the nearest data center if you want low read latency but are okay with timeline consistency), optimize for large streaming writes, support features like automatic expiry of data with a time-to-live (TTL) feature, and scale out and stay fault tolerant with your data, with primitives to control how you partition data,

A

how you lay out data on disk, and so on.
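
[Editor's note: a minimal sketch of the time-to-live feature mentioned above, written as a hypothetical cqlsh session. The app.events table is invented for illustration, and YugaByte's CQL dialect may differ slightly from the standard CQL shown here.]

    -- Assumes a keyspace named app already exists.
    cqlsh> CREATE TABLE app.events (
               device_id text,
               ts        timestamp,
               reading   double,
               PRIMARY KEY ((device_id), ts)
           );

    -- This row expires automatically one hour (3600 seconds) after the insert:
    cqlsh> INSERT INTO app.events (device_id, ts, reading)
           VALUES ('sensor-1', '2018-03-13 08:00:00', 21.5)
           USING TTL 3600;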

A

Okay. So if you take Azure Cosmos DB as the bleeding edge of NoSQL and Google Spanner as the bleeding edge of SQL in a cloud-like environment today, YugaByte brings the best of the two worlds into a single database: we're multi-model and high-performance just like Azure Cosmos DB, and ACID-transactional and globally consistent like Spanner. Okay, so very briefly on the architecture: at the core, it's a scale-out database; you'll be able to add machines in order to scale it out.

A

Each node runs what we internally call DocDB, a heavily customized version of RocksDB, and to replicate data with consistency across nodes we use Raft-based replication. We have a global transaction manager to do distributed transactions (as distinguished from single-row ACID) while still keeping things highly performant, and we do automatic sharding and load balancing across all the data,

A

irrespective of how you access it. All of this is written in pure C++; everything is built ground-up in C++ for high performance. And finally, we allow people to access the database through well-known languages as starting points.

A

We offer the Cassandra Query Language (CQL) as a standard query language, the Redis API, and we're working on Postgres as another API. So you'll be able to come in through any of these three APIs; each of them maps onto a table in the core data fabric that it's able to serve. In some of these languages we've actually added extensions as we see fit, to support the use cases we want. For example, in Cassandra we added distributed transactions, so you'll be able to do BEGIN TRANSACTION and do some operations inside the transaction, along with secondary indexes, JSON data support, and so on.

A

So with that said, YugaByte has no external dependencies, so it can run on premise, on a cloud, on a VM, or in a container. It can pretty much run anywhere, on any IaaS.

A

All right, so that was a brief intro. Now let me go into the current state of YugaByte, and then we can jump into a demo of a shopping cart. On the current-state side: we're at 0.9.7, a publicly available beta, marching toward a 1.0 generally available version in the April timeframe. And we've tested it for high scalability:

A

we've gone up to fifty nodes, and we're able to see that you can scale linearly and get millions of read and write IOPS without really sacrificing latency. What you see at fifty nodes, on AWS, for point key-value reads is 2.6 million reads per second with 200-microsecond latencies, and 1.2 million writes per second at three milliseconds, where that's a 3-way replicated, consistent write. And it's a highly performant database, because that's another of our core pillars.

A

We tested it against some of the more performant NoSQL databases, like Cassandra. This is a YCSB report of how YugaByte compares with Cassandra; it shows the number of operations per second. We've put in a lot of effort, and a lot of learnings from running such systems in production at Facebook, to squeeze a lot of performance out of it. But performance is a continuum, it's never-ending, so we will keep improving it.

A

We added distributed transactions, so you'll be able to create a Cassandra table and, in the classic bank account example, you have account name, account type, and balance. You can shard your data by account name; having sharded by account name and kept each account's rows together, you'll be able to perform cross-shard transactions, where you transfer some money from one account to another account that potentially lives on a different node. And we do all the clock tracking, clock-skew handling, etcetera.
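
[Editor's note: a sketch of the bank-account example just described, as a hypothetical cqlsh session. The transactions table property and the BEGIN/END TRANSACTION block follow the YugaByte CQL extensions Karthik describes, but the schema and exact syntax here are assumptions.]

    -- Table sharded by account_name, with distributed transactions enabled
    -- (a YugaByte extension to CQL):
    cqlsh> CREATE TABLE bank.accounts (
               account_name text,
               account_type text,
               balance      decimal,
               PRIMARY KEY ((account_name), account_type)
           ) WITH transactions = { 'enabled' : true };

    -- Move $100 from one account to another. The two rows may live on
    -- different nodes, but both writes commit atomically or not at all.
    -- (New balances computed by the app after reading both rows.)
    cqlsh> BEGIN TRANSACTION
             UPDATE bank.accounts SET balance = 900.00
               WHERE account_name = 'alice' AND account_type = 'checking';
             UPDATE bank.accounts SET balance = 1100.00
               WHERE account_name = 'bob' AND account_type = 'checking';
           END TRANSACTION;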

A

This is an actual running system in one of our customers' environments. It's an example of a user login/password style setup: two copies of the data in us-west, two copies in us-east, and one copy in Tokyo. The replication factor is five, which means you need a quorum of three replicas to complete a write successfully with consistency, and your reads can happen from whichever data center is local to you.

A

This setup can actually survive an entire region failure and still give you low latency from any of the different regions. So users logging in would be able to log in very quickly, but users changing their password would see higher latencies: the read latencies are in the 200-microsecond range, whereas the write latencies are close to 200 milliseconds, and this is an average across load testers running in all three geographic regions. That's because you have to get quorum to establish consistency, and a write from Tokyo would invariably take longer to do that.

A

YugaByte already works with multiple clouds: Amazon, Google, and on-premise are well tested, and Azure is something we're adding support for. But let's jump quickly into our demo, and this is an all-Kubernetes demo. YugaStore is a sample app, an online e-commerce book store. You can find it on GitHub;

A

it's an open-source project as well. So the first thing that I have done (because it's not terribly interesting to do this live and wait for it to come up) is to bring up YugaByte as a Kubernetes StatefulSet. It's a replication-factor-3 setup, so the YugaByte cluster is 3-way replicated and has three nodes in it, and this can be scaled up or down on the fly.
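
[Editor's note: a rough sketch of what this StatefulSet bring-up might look like. The manifest filename is hypothetical; the yb-master and yb-tserver pod names match what appears later in the demo.]

    # Bring up the YugaByte cluster (3 masters + 3 tservers) on Kubernetes:
    $ kubectl apply -f yugabyte-statefulset.yaml

    # Watch the pods come up:
    $ kubectl get pods
    NAME           READY   STATUS    RESTARTS   AGE
    yb-master-0    1/1     Running   0          2m
    ...
    yb-tserver-2   1/1     Running   0          2m

    # Scale the data-serving tier up or down on the fly:
    $ kubectl scale statefulset yb-tserver --replicas=5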

A

The second thing I did was to bring up the YugaStore app. This is a Node.js/Express and React based app that simulates a bookstore, a very simple e-commerce app: it lists some books, lets you group books into some static categories, and so on. So, having done that, let me quickly jump into showing you the actual application.

A

Hopefully you guys are able to see the screen; it's the Kubernetes dashboard, and please do say something if you're not, otherwise I'm assuming it's all good. What you see here: first, the three tservers are the slaves; these are the guys that actually serve IO. The three masters are background coordinators; there are as many masters as the replication factor. And the last deployment here is the stateless app deployment. So I'm going to go ahead and switch into the YugaByte dashboard.

A

This is actually running inside Kubernetes, and you can see that the different masters have talked to each other and, using Raft, elected one of themselves as the leader. This setup has a replication factor of three. It has one keyspace with one table in it, called products, and we're going to look at how that shows up in the UI. It's got three tservers, and obviously that's scalable on the fly. So, if I go to the... oops.

A

Okay, yeah, give me a second here. Sorry.

A

Something's got to go wrong when you do a demo, right?

A

Okay, we'll get to that in a second.

A

So, that fan was humming and you guys wouldn't have been able to hear me, but it's all good now, so we're back in business. Sorry, that thing really makes a noise on my machine. So this is the tablet servers view. What you'll notice about this setup is that it's all running in a single cloud, single region, single zone. So it's not multi-anything here, but it can very easily be deployed in a multi-region, multi-zone, or multi-cloud fashion

A

that the database internally understands. Now let's go to the React app. This is the app that shows you a list of products, the books that are being listed.

A

There are some static categories, so you can look at just the business books, cookbooks, mystery and suspense books, and so on; these are the more static

A

groupings. You'll be able to go into any one of these books and see some static content, like the title and the description, and some dynamic content, like the average rating (the number of stars people have given on average) and the total number of reviews, and so on. And you'll be able to sort by the dynamic attributes as well: you can see the books sorted by the total number of stars they got, or the total number of reviews they have, and so on.

A

Right, so that's the app, and I'm still working on adding checkout and the shopping-cart side of things, which requires distributed transactions. But jumping back to our presentation: how does YugaByte simplify this? Typically, for the less dynamic content, like the title and the description, a SQL-like API such as Cassandra's is a great choice to store the data, because you'll be able to see most of the attributes

A

you want, and you'll be able to add the ever-growing attributes to something like a JSON data type. Whereas for the highly dynamic content that changes all the time, Redis is a great fit for the things you want to store, for example the average rating or the total number of reviews. So in YugaByte, you'll be able to model your product as a table and run a query such as the one shown, and we'll try

A

this live, to select some books from the business category. And at the bottom, you'll be able to use Redis sorted sets, with the number of reviews as the score to find the most-reviewed books, or the number of stars as the score to find the highest-rated books. Now, let's actually do that.

A

Yeah, okay. So I'm going to connect to this tserver, tserver-0, using a Cassandra shell, and we can actually do a SELECT and fetch the top two books in the business category, which it's able to do. You can go ahead and add any number of categories, you can alter the table online, upgrade the software online, and so on. You can actually reconfigure the database to run on a different set of nodes or regions without taking any application downtime.
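
[Editor's note: a hypothetical reconstruction of this cqlsh step. The demo's actual products schema is not shown in the transcript, so the keyspace and column names are invented; filtering on a non-key column like this assumes an appropriate secondary index.]

    # Open a CQL shell against the first tserver pod:
    $ kubectl exec -it yb-tserver-0 -- cqlsh

    -- Top two books in the business category:
    cqlsh> SELECT id, title, author
           FROM store.products
           WHERE category = 'business'
           LIMIT 2;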

A

I'm going to go ahead and connect to Redis. If you want the top ten books by the number of reviews, you can go ahead and run that; that's a Redis sorted set. All of this data is stored persistently inside YugaByte, so you don't need to supplement Redis with the data being present in another database; it's just a single database dealing with everything. And finally, let's run the equivalent of a robot user, like a click farm, as a special example.
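
[Editor's note: a sketch of the Redis step using standard sorted-set commands; the key name books:by_reviews and the member IDs are assumptions.]

    # Connect to the same cluster through its Redis API:
    $ kubectl exec -it yb-tserver-0 -- redis-cli

    # Review counts are kept as sorted-set scores, e.g. book-17 has 42 reviews:
    127.0.0.1:6379> ZADD books:by_reviews 42 book-17

    # Top ten books by number of reviews, highest first:
    127.0.0.1:6379> ZREVRANGE books:by_reviews 0 9 WITHSCORES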

A

So it's just viewing products one after the other, and we'll be able to go into our UI here, refresh, and we should start seeing some load getting pumped into the various machines. The point here is that you can add nodes on the fly and the load gets evenly distributed, and you can change the setup of the system to run on a different cloud or region, all while the system is online.

A

Okay, let me spare my machine the trouble and go back to the presentation. The YugaByte database is an Apache 2.0 project. We follow an open-core model.

A

We have a CE edition, which is everything I showed you today in the demo, and we have an EE edition that has the UI-driven deployment, deep integration into the clouds, built-in metrics and alerting, as well as some features that are more production-grade, such as async replication to remote regions, or tiering of data to cheaper tiers when you have a lot of data. All of those are in the EE. You can check us out on GitHub; we have great docs.

A

You can get started in just a few minutes if you want to give it a spin on your laptop. Our next steps in the YugaByte Kubernetes journey (this is on our roadmap, and we're working on it internally) are to build a YugaByte operator, so that people who are running this in production can do so with great ease, and to do an OSB (Open Service Broker) integration so that end users can consume it easily. So the first one is making it easier for the operator,

A

and the second one is making it easier for the user. As for YugaByte itself, our aim is to make YugaByte a CNCF sandbox project, because we really think we can simplify the way applications are being developed, especially on the stateful tier; we can simplify that quite a bit. And we'd love to be involved in figuring out how to achieve various things, like cross-region deployments, local disks, and so on.

A

So that's all I had. Please feel free to reach out to us, or you can reach out to me directly.

B

Good to hear from you.

C

Excellent. Thank you so much for the presentation, that was great. We can leave it open for a few minutes for questions. Do any of you have questions for Karthik?

C

I'll kick one off here. How long has the database been available on GitHub?

A

Got it. So we've been on GitHub for about four months. We've been building the database for about two years, but we've been building it without thinking about how to monetize the project or go to market; we didn't want to focus on that. We just wanted to focus on the core problem, because it's a fairly hard problem to solve and it takes a lot of work to get there. But more recently we've tried to figure out what the company is going to look like and what we want to do.

A

It's been out on GitHub for about four months, and we're working with communities like Kubernetes, where the philosophy of what the CNCF does and what we want to achieve are fully aligned. So we want to figure out how to make it even more accessible to developers.

C

Whatever you can share with us here: who's actually using it, what types of use cases have been looking at this, and why?

A

Great question. So we've installed YugaByte at about 10 to 15 customers who are trying it out. We have a couple of customers that are going into production this quarter. In fact, effectively, with the promise of keeping backward compatibility and

A

all of that, we're waiting for these customers to go into production and become referenceable around our 1.0 time, which is going to be the April timeframe, and we expect a few more to come on board and go into production soon after. We are being deployed on-premise, on Google Cloud, and on AWS. Actually, I should reverse the order: AWS, on-premise, and Google Cloud is the order by number of customers that we see. As for use cases:

A

we are closest to going into production for single-row ACID use cases. These are in the fintech industry, where you have stock tickers and stock quotes and all of that; things like logistics and tracking, which is closer to real-time IoT, where you want to figure out where vehicles are and how to do reporting on them; some e-commerce sites are looking at us; and security and fraud is another area.

A

So it's a variety of different verticals, because the database itself is pretty horizontal, but most of these applications require two or more of those three pillars: transactionality, whether single-row or multi-row, where data consistency is important; distribution across the world, with sync, async, and hybrid deployments and a microservices architecture; and good performance, since this is a serving tier.

C

Quiet group today. All right, thank you so much for presenting to us; I think that was really cool. Looking forward to working with you guys, and please reach out to the storage working group if you have anything that you need. Kind of looking forward to collaborating with you in the future here. Awesome, thank you. Thank you. All right, team:

C

so, on to the next agenda item for the day. Last time we took the last half hour to talk a bit about our KubeCon presence in the EU, and I think what we decided was that everybody needed some more time to think about it. Just a reminder: we had three sessions slated for KubeCon.

C

First of all, the private session is one where we're still trying to figure out who's actually going to be there and what it's going to be. I think the private one was going to involve possibly getting some members of the TOC to come speak with the SWG about their thoughts on the working groups and what they'd like to see, and trying to get more of a charter from them, so that we can start tackling some of those important things they feel

C

we should be doing. So that one is still in discussion, and I'll report back on where that goes. There were two other ones. I think Saad mentioned that the intro session was overlapping with the Kubernetes intro session, and we're working with the program committee to get that moved right now; I think that one's still a go, and I'll let you guys know when the time gets updated so it's not conflicting. And then the second one was the deep dive.

C

So the ask last time was to get people to think about what kind of agenda, or what topics, they think we should be covering in these two sessions, and that's where I wanted to hand it back to the group to chat. So, who's got some ideas or comments they want to share?

C

Mr. Steve Watt, are you out there? You're always chatty.

F

Cool. Yeah, Dan, I think, specifically for me, the main thing is this: given that, as Saad mentioned, there's a Kubernetes meeting, I thought maybe we needed just one CNCF meeting in the schedule for storage. My comment on that was: I know it's easy to do a meet-and-greet, theoretically, but my experience having tried to do that is that everyone turns up expecting to see a session. So it might be a good idea to do something outside of the session, outside of the track: an actual, more free, open meet-and-greet kind of thing.

F

So, a total of three sessions. One, you've got Saad and, I think, his Kubernetes session. Then you've got a CNCF one: maybe just where we've been, where we're going, catching folks up on more recent advancements, the different phases of project acceptance, where the CNCF projects fit in that, and how to get involved; I think that's more of a presentation style. And then we could have more of a high-bandwidth,

F

casual thing, you know, cheese and wine (maybe both interpretations of the whine), in another forum. That's just an idea I think you might want to consider.

C

Okay, thanks for that. Anybody else have any thoughts on that?

C

Saad's got a plus-one to Steve on that. Okay, cool.

C

Just in terms of people that are going to be present: we've got these sessions, and we can figure out exactly what they're going to be, but who's interested in actually being involved in the planning, and possibly the delivery, of these sessions? Like, who's actually going to be at the conference?

D

I'll be there and I can help out.

C

Okay, cool.

D

I will be there, but I'm going to be there only on the Thursday. Does that work?

C

That's all right, yeah.

D

That'll work.

C

Next, what about you for Thursday?

C

Who else is gonna be there, wants to help, wants to participate?

C

Mr. Brad Childs? No?

D

I'm not gonna be present.

C

Okay. Ben, are you gonna be there as well?

E

Oh yeah, I'll be there.

F

This is Steve. We've got some vacation scheduled for the Friday; my wife and I are headed out of town. So if you guys could shuffle things, I could look into it: especially if we're focusing on Thursday, I could travel back Friday and maybe come. So just let me know. I'm not opposed to it; I was kind of wanting to go in the first place, I just have to figure out the logistics.

C

Got it, okay. So for now it sounds like it's Saad, possibly Steve, Ben, and myself.

C

Ben, what do you think: should we continue trying to chat about this on the SWG calls, or do you think we should just set up some separate calls to discuss as a smaller group?

E

Well, I think, to start: does everybody feel like we understand the times and how we want to use them? Like, we're not gonna do the late-night one, right? That was kind of an up-in-the-air question, or...

C

I mean, what we could use that for was to get some of the TOC members to just chat about things, but it's still up in the air whether that's gonna happen. Yeah.

E

And whether they're gonna be able to join, yeah, and not have dinners.

E

I mean, I love the idea of other TOC members joining. I think you'd probably have Camille and Brian and some other folks that would be willing to come by and just talk to the SWG about what we're trying to do.

E

You know, I think presentations like that are great, and the WG being a place where we can have these presentations is great. But I think it would be great if we also tried to use the face-to-face time to decide what else, if anything, we want the SWG to do. I think we left the last face-to-face with some ambitious goals of defining some stuff around cloud-native storage and what it means to operate cloud-native storage, and I

E

don't... you know, we're all busy, and I don't think any of us have really been able to take that on, and I think we should just make it clear whether or not we want the SWG to have that as part of its mission.

E

That, to me, would be a good use of the time. Sorry.

F

And what specifically is part of the mission?

E

Well, it's unclear exactly what we want the SWG's output to be, and I think after the last face-to-face, one of the outcomes, at least the way I interpreted it, was that we were gonna try to

E

define, in at least a looser sense, cloud-native storage from an operations perspective versus an application-consumption perspective. Yes, absolutely. And I don't know that we've dug back into that, or that anyone's really had the time to do it. I think if we had done that, there would have been

E

a more clear mission for the SWG, which is to produce, whether you want to call it white papers or definitions or whatever, something along those lines. But that's not really something we've done, which is okay; to me, though, it leaves the group with a somewhat less defined, less clear mission about what its output is and what role it's playing. To me, this face-to-face would be good for just settling that, even if the role we settle on is not as ambitious as defining all that other stuff; that's fine.

E

It would just be good to have a clear understanding of who we want to be and who we don't want to be, what we want to do and what we don't want to do.

F

100% agree; I think that's good. The storage working group is in the critical path of making any forward progress; I feel like until we do that, we've sort of been in this paralysis phase. And I think, just personally, it's exactly what you described: is the CNCF storage working group

F

(you can't see my air quotes) "storage" like the Kubernetes storage SIG, which is basically the storage that supports the application platform, or is it all application persistence? We've got to decide what we are. I do have an opinion on that, but I don't want to hijack this meeting to jump into it; I just think we need to get to the bottom of it.

E

Yeah. So, Clint, to answer your question: I would be happy with having one of the sessions dedicated and devoted to figuring that out. I think we can either try to get feedback from TOC members ahead of time, have them be present so we also get their perspective on it, or brainstorm ourselves and then go back and say, hey, this is what we think we're doing and who we think we are.

C

So, to make good use of the time: do you think we've got the three sessions right? The one at 8 o'clock is questionable in terms of who we can actually get there. Are you saying that maybe we take one of the general sessions and have that be a roundtable format, or are you saying the 8 o'clock one is the one where we try to tackle it?

E

Um, I mean, I think it's gonna be whichever one we can get critical mass at. Rather than picking the time, I think it's whichever one where I feel we can actually get sufficient representation of the group and sufficient coverage from various perspectives and views.

C

Uh, do we think that a public audience is going to benefit from seeing some of that? I wouldn't call it dirty laundry; I think it's just open-source process at the end of the day, figuring out what we need to do and what we're gonna do. But is that something we want to be a public session? I mean...

E

I think that's perfectly fine if folks from the public want to come in. I don't think there needs to be any shame in us wanting to better define exactly how we want the group to run. In fact, I think all groups should probably be doing this periodically; it's just a continuing reflection on how things are working. Yeah.

C

And I guess what I'm thinking about is that it's in the catalog, again, so maybe we just need to make sure it's well defined what exactly the session is gonna be, so that people aren't disappointed, as Steve said before.

E

Yeah, I mean... I don't know.

E

Yeah, what else do we feel we have queued up to talk about, if not this? And I apologize, Steve; my phone seems to have disconnected right when you were speaking and then reconnected, so I completely missed everything you said, and all I got back to was Clint saying, okay, Saad plus-ones that.

D

Plus-one. You gotta let some items bear.

F

No, I think I can summarize it quite quickly. I was just basically saying: Saad has a K8s session,

F

so we should probably have a CNCF presentation rather than a meet-and-greet in the track, because, just from personal experience, despite an organizer wanting to have a meet-and-greet, what tends to happen is people show up expecting to see a session, and they don't talk, and then you just stand up there looking weird. And then the third one, in the evening, would be the casual meet-and-greet; if we can get TOC folks there, awesome, and if not,

F

we just have an open forum for high-bandwidth conversations, which we can always use. Because I think one thing (my guess is it's something the TOC has observed) is that it's taken a while to diffuse exactly how the CNCF works, the different aspects of the governance model, and so on. I know personally I'm being routinely educated as I ask more questions, so I think that's an opportunity for more education and conversation around that.

C

You know, the last time we talked about the sessions, there were two things that I wrote down in my notes. One was that we could have a short presentation setting some context and then a panel discussion, an open forum. The second was that we'd have a review of what the SWG has been discussing as a landscape; and, you know, this obviously hasn't been ratified and is still to be determined,

C

but we'd at least get to express our point of view, and that would be more of a presentation where we put up the landscape and describe the different components and some of the projects that would fit inside it.

E

Yeah, I mean, I think that sounds great, Clint, from the perspective of: if we want to have a public session, we want to bring people in and let them interact with the storage working group, ask questions, and learn more about storage.

E

I think that's all great, and I think we can do that as perhaps one of the earlier sessions. To me, for the discussion with the TOC, and for the internal discussion within the storage working group itself, the burning question is really more about how we want to see the group function and operate going forward. And again, sessions like today, where we have great presentations,

E

where we can ask questions and educate folks: I think it's a great opportunity to share and discover and talk about a lot of the interesting storage projects out there. It can be a completely acceptable decision for us to say that this is the extent of what we want the SWG to be; but we could also do a lot more, and I think it'd just be great if we had clarity, for this group, for the TOC, and for everybody else, about any other stuff

E

we're trying to do. It seems like a good opportunity to have these discussions face-to-face versus just online, but we could also do it on one of our future calls. So I'll put it back on everybody else: if everyone agrees that we want to have those discussions, when do we want to have them? Do people want to have them at the face-to-face, and do we want to leave the... yeah.

E

Sorry. Do we want to have them on the calls, and leave the face-to-face as more of a 'hey, let's educate people out there about what's happening in storage land, let's talk about some of the projects that have presented,

E

let's talk about this kind of stuff, let's give perspectives', or do we want to use it as a working session for the storage working group?

C

I'd say both. I

C

mean, I think that at the conference, for the public sessions, people are gonna expect... I mean, you're gonna have complete newbies to the area who are just really interested in storage, and I think they'd probably expect more canned and well-presented information so that they can quickly catch up. I think we've got our needs as a group, which are somewhat separate, but I feel like we'd accomplish both at the event.

C

I think we could take one session and make sure we have a great intro presentation, the landscape, and open panel discussion, so we'd have a mix of intro and more advanced discussion going on there. And then maybe the expert session we use as the face-to-face: the roundtable with the TOC members.

E

Works for me. Let's see what the other folks think.

C

Sounds good.

C

So, is that what we want to do? Anybody have any objections, any other ideas?

E

Sounds like, then, we'll want to prepare some of that canned content, Clint, which you and I can kick off, and then we can bring in other contributors.

E

Okay.

C

Excellent. Should we call the meeting for the day?

C

All right. We have, not next week, but at the next session, I believe it's Dotmesh presenting for our first 30 minutes. If anybody has any other storage projects, please do reach out; we definitely want to get that agenda filled. I definitely enjoy hearing from all the different, interesting storage projects out there in the ecosystem; I think it helps educate me on what's going on, so I enjoy it. So if you guys have anything else you want to bring up here,

C

please do let me know so we can put it on the agenda.

F

Yeah, just a plus-one on the Dotmesh one. For those of you that don't know, that's Luke and crowd, who were the creators of Flocker, so pretty relevant to the space. Pretty smart; the original gangsters of containers.

C

Cool, all right. Well, thank you, everyone, for your time, and you get ten minutes back in your day.

B

All right, take care, all.

A

Right.

From YouTube: CNCF Storage WG Meeting - 2018-03-13
