Ceph / Cephalocon Barcelona 2019

These are all the meetings we have in "Cephalocon Barcelona…" (part of the organization "Ceph"). Click into individual meeting pages to watch the recording and search or read the transcript.

10 Jun 2019

Nobody Knows What PGs are Good For, Only I Do - Danil Kipnis, 1&1 IONOS Cloud GmbH

As a member of a team developing a distributed low-latency block store, the author was asked "why not just use Ceph?" so often that he finally forced himself to understand some of the CRUSH/RADOS basics.

It turned out that the block store his team was working on was fully "declustered", while Ceph's PG indirection layer makes it possible to limit the level of "declustering".

The PG concept is explained in various online sources, but mostly from a Ceph configuration perspective. This short talk takes a developer's perspective and shows, on a very small example cluster, what particular technical problem the PG layer solves and how. Specifically, it illustrates the trade-off between cluster utilization during failure recovery and the probability of data loss due to coincident device failures, and shows how exactly PGs allow favoring one or the other.
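To make the trade-off concrete, here is a small Monte Carlo sketch (ours, not from the talk, and not CRUSH's real placement math): with 3-way replication, each PG pins one triple of OSDs, so more PGs spread recovery over more disks but also create more distinct triples that a coincident triple failure can wipe out entirely. All parameters are illustrative assumptions.

```python
# Toy Monte Carlo: probability that a random simultaneous 3-disk
# failure destroys all replicas of some PG, as a function of PG count.
import random

OSDS, REPLICAS, TRIALS = 30, 3, 2000

def loss_probability(num_pgs):
    losses = 0
    for _ in range(TRIALS):
        # each PG's replica set is a random triple of OSDs
        pgs = {frozenset(random.sample(range(OSDS), REPLICAS))
               for _ in range(num_pgs)}
        failed = frozenset(random.sample(range(OSDS), REPLICAS))
        if failed in pgs:  # all three replicas of some PG are gone
            losses += 1
    return losses / TRIALS

for num_pgs in (8, 64, 512):
    # more PGs -> recovery fans out over more disks, but more distinct
    # replica sets exist for a triple failure to hit
    print(f"{num_pgs:4d} PGs -> P(data loss | 3 disks fail) ~ "
          f"{loss_probability(num_pgs):.3f}")
```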

About Danil Kipnis
1&1 IONOS Cloud GmbH
Linux Kernel Developer
danil.kipnis@cloud.ionos.com
Danil Kipnis is a Linux kernel developer at 1&1 IONOS Cloud (formerly ProfitBricks GmbH). He works in the storage team, designing and developing components for an in-house SDS solution centered around a low-latency RDMA network. He holds a Master's degree in Computer Science from TU Berlin. As a researcher at the Telecommunication Networks Group, TU Berlin, he published several papers in the area of MAC protocols for wireless sensor networks.

His most recent talk was at Vault 2017, the Linux Storage and Filesystems Conference, where he presented IBNBD, an RDMA network block device driver.
  • 1 participant
  • 6 minutes
throughput
replication
disks
vm
volumes
storages
clustering
distributed
parallel
arrays
youtube image

24 May 2019

200 Clusters vs 1 Admin - Bartosz Rabiega, OVH

Bartosz will explain how and why a multi-cluster Ceph ecosystem was developed at OVH.
Why is it good to have multiple Ceph clusters?
How can a single member of the Ceph-as-a-Service team at OVH keep an eye on and take care of 200 Ceph clusters?
These questions and more will be addressed by Bartosz.

About Bartosz Rabiega
OVH
DevOps Engineer
IT enthusiast...

An IT professional since 2011, working as a QA Specialist, System Analyst and Software Developer. Since 2016 he has been working as a DevOps engineer in the Ceph-as-a-Service team at OVH (Poland).

He develops and takes daily care of the Ceph-as-a-Service project, which provides 200 Ceph clusters.

Passionate Python developer.
Hardware and Software enthusiast.
Likes to understand high and low level designs.
Data visualization - let's do more!
  • 4 participants
  • 36 minutes
servers
devops
provisioning
ovh
infrastructure
operating
maintenance
cloud
safe
docker
youtube image

24 May 2019

A Glimpse of the New Ceph Messenger Built on Seastar - Yingxin Cheng & Vivian Zhu, Intel

The Seastar framework brings a new level of abstraction over event-driven programming. It is by nature shared-nothing and non-blocking, hiding asynchronous functions behind its futures and continuations. That's where the Crimson project started: to liberate Ceph code from complex callbacks and rethink the design for performance from the bottom up.

As the base of all other Ceph components, which dispatch messages and connect as a cluster, the Messenger is very performance-sensitive and also self-contained: a perfect starting point for the redesign and validation work in Crimson. Although the new messenger is still under construction, facing challenges both with the new programming paradigm and with being really efficient, plenty can already be shared and discussed, including Yingxin's gotchas during the implementation and Jianpeng's performance tests against the existing async messenger.
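For readers unfamiliar with Seastar (which is C++), a loose analogy of the futures-and-continuations style is Python's asyncio: I/O steps suspend and resume instead of blocking threads or nesting callbacks. This is only an illustration of the programming model, not Crimson code.

```python
# Analogy only, NOT Crimson/Seastar code: per-connection work proceeds
# without blocking threads or nesting callbacks.
import asyncio

async def read_message(conn_id: int) -> str:
    await asyncio.sleep(0.01)          # stand-in for a non-blocking socket read
    return f"msg-from-{conn_id}"

async def dispatch(conn_id: int) -> None:
    msg = await read_message(conn_id)  # continuation resumes here; no thread blocked
    print("dispatched", msg)

async def main() -> None:
    # shared-nothing flavor: each connection's work is an independent task
    await asyncio.gather(*(dispatch(c) for c in range(4)))

asyncio.run(main())
```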

About Vivian Zhu
Intel Corporation
Software Engineering Manager
Vivian Zhu is an engineering manager with Intel's System Software Product Group. She manages the engineering team focusing on storage-related technology development and optimization; her recent contributions have been to Ceph, OpenStack Cinder, OpenSDS and rack-scale design. Vivian holds a Master's degree in telecommunication engineering.

  • 1 participant
  • 37 minutes
throughput
threads
cpus
parallelism
replication
sophisticated
protocol
cluster
messenger
io
youtube image

24 May 2019

A Hitchhiker's Guide to Ceph RGW PubSub & Cats vs. Dogs - a Hybrid Cloud Storage Story - Huamin Chen, Yehuda Sadeh-Weinraub & Yuval Lifshitz, Red Hat

About Huamin Chen
Principal Software Engineer, Red Hat
Dr. Huamin Chen is a passionate developer at Red Hat's CTO office. He is one of the founding members of Kubernetes SIG Storage and a member of the Ceph, Knative, and Rook communities. He has previously spoken at KubeCon, OpenStack Summits, and other technical conferences.

About Yehuda Sadeh-Weinraub
Senior Principal Software Engineer, Red Hat
Yehuda is a senior principal software engineer at Red Hat, and has been working on the Ceph project for over 10 years.

About Yuval Lifshitz
Red Hat
Principal Software Engineer
Principal Software Engineer at Red Hat since 2018:
- KubeVirt: developing networking solutions for virtual machine payloads in k8s/OpenShift.
- Ceph: adding connectivity between the RadosGW and external message brokers (e.g. RabbitMQ).
Previously a Senior Software Architect at Sandvine (2007 - 2018):
- Designing and developing a pluggable policy engine (C++/Linux/FreeBSD)
- Designing and developing policy charging and enforcement products based on wireless specifications (3GPP – Gx, Gy, Rx, Rf, Sd)
- Delegate to standards organizations: IETF (co-author of the update to RFC 4006) and 3GPP
  • 1 participant
  • 34 minutes
kubernetes
server
openstack
openness
services
protocols
functionality
providers
gateways
api
youtube image

24 May 2019

Affordable NVMe Performance on Ceph & Ceph on NVMe - True, Unbiased Story to Fast Ceph - Wido den Hollander, 42on & Piotr Dalek, OVH

About Wido den Hollander
42on
Ceph Trainer and Consultant
Netherlands
Website: https://42on.com/
Wido den Hollander has been part of the Ceph community since 2010. In 2012 he started his company 42on, providing Ceph professional services, and since then he has worked as a full-time Ceph trainer and consultant, helping organizations design, implement and run Ceph. Bulk storage, high performance, cheap, expensive: he has seen all these variants of Ceph clusters. Over these years he has also contributed various patches and features to Ceph and the ecosystem surrounding it.

About Piotr Dalek
OVH
Software Engineer
Software engineer with a primary focus on systems performance and efficiency. For the last two years he has worked at OVH as a software engineer, assisting the storage team and providing Ceph expertise and patches to the Ceph community. Previously he worked as a software engineer at Fujitsu on the Ceph-based Eternus CD10000 storage appliance, improving Ceph along the way. He has authored several notable improvements to Ceph, including but not limited to:
- faster recovery (vastly reduced time required to start recovery after an OSD crash)
- prioritized recovery (instructing Ceph to recover a particular dataset first)
- numerous performance improvements, too many to list here
  • 2 participants
  • 45 minutes
provisioning
prepared
chefs
users
self
production
processing
optimal
safety
nvm
youtube image

24 May 2019

Brazilian Government Case - Brenno Martinez, Serpro

Developed by SERPRO, the Brazilian government IT company, CNH Digital is a project that aims to replace the traditional printed driver's license with a phone application. Ceph is responsible for storing drivers' fingerprints, signatures and photos. Each license consists of 12 objects, and there are currently 600 million objects stored. The system serves multiple government services and systems, acting as central identification storage.

About Brenno Martinez
Serpro
Infrastructure Engineer
Curitiba, Brazil
Website: serpro.gov.br
SERPRO is Brazil's biggest government-owned IT services corporation. It has grown by developing software and services, and consolidated its position by improving technologies adopted by several federal, state and municipal public agencies and incorporated into Brazilian citizens' lives.
Brenno works on a team responsible for providing distributed storage based on Ceph, which supports dozens of solutions.
  • 1 participant
  • 5 minutes
sap
implementing
software
project
os
servers
data
cluster
performance
storage
youtube image

24 May 2019

CRUSH-ing the OSD Variance Problem - Tom Byrne, Storage Sysadmin

Tom will be talking about the challenges of keeping OSD utilization variance under control in a large, rapidly growing cluster.

In the past year he has been managing a large and heavily utilized Ceph cluster that has grown from 1500 OSDs to 5000 OSDs (40PB), while maintaining an average OSD utilization of over 50% throughout the year. This has presented some unique challenges, and Tom will discuss these, along with the positive impact the upmap balancer has had on this process, and general advice for growing near-full clusters.

About Tom Byrne
Science and Technology Facilities Council
Storage Sysadmin
Oxford, United Kingdom
The Science and Technology Facilities Council is a world-leading multi-disciplinary science organisation. We provide access to large-scale facilities across a range of physical and life sciences, enabling research and innovation in these areas. We do world-leading research, and need world-leading computing to facilitate it.

As a storage systems administrator for STFC, I've been working with Ceph for a number of years, and have been part of a team that manages several Ceph clusters for various use cases. The largest cluster currently has five thousand disks and 40PB of raw capacity, and is used to store data for the LHC experiment at CERN.

Running a cluster at this scale has presented a unique set of challenges, and has allowed me to develop an understanding of how to make best use of Ceph’s features to maximise efficiency, data security and performance. Since the last Cephalocon we have tripled the size of our large and very full cluster, going from 1.5k to 4.7k OSDs while continuing to fill the cluster to capacity, which has led to a number of insights into managing the growth of a large, full Ceph cluster.

I've talked previously about our large Ceph cluster at the Cephalocon APAC 2018 (Talk title: Erasure Code at Scale), and have talked about Ceph at various conferences in the field of computing for high energy physics.
  • 2 participants
  • 44 minutes
cern
echo
researchers
rutherford
relays
computing
clusters
hosts
storage
stuff
youtube image

24 May 2019

Ceph Manager Dashboard - The New Way To Manage Ceph & Gateway Management in Ceph Dashboard: From Top to Bottom, Kai Wagner, Ricardo Marques and Ricardo Dias, SUSE Linux

About Ricardo Dias
SUSE Linux
Senior Software Engineer
Ricardo Dias is currently working as a senior software engineer at SUSE Linux, in the Enterprise Storage Team, where his main task is to contribute to the upstream Ceph storage system project. He is also an Integrated Member at the NOVA LINCS laboratory, where he still collaborates in several research projects and co-supervises a PhD student with Prof. João Lourenço.

He received his doctoral degree from the Universidade Nova de Lisboa, Portugal, in 2013, under the supervision of Prof. João Lourenço, on the topic of Transactional Memory.
During his research career, he has published, and presented, several research papers in high ranked scientific conferences.

About Kai Wagner
SUSE Linux GmbH
Product Owner
Fulda, Germany
Kai is a Product Owner at SUSE, responsible for SUSE Linux Enterprise Storage.

Before joining SUSE Kai Wagner was an administrator and consultant at it-novum GmbH, a German Open Source minded consulting company.

Kai has worked on Windows and Linux, virtualization, high availability, storage and networking. He is also one of the founders of the openATTIC project, which evolved from a pure storage management platform into a Ceph management and monitoring interface. When he's not working, he really enjoys sports and playing with his kids.

About Ricardo Marques
SUSE Linux
Senior Software Engineer
Ricardo Marques has been working for SUSE since 2017, where he started as a contributor of the openATTIC project and is currently contributing to the Ceph Manager Dashboard plugin, applying his experience and passion for web development.

Before joining SUSE, Ricardo completed his BSc in computer science and worked in web development for 10 years, using Java-related technologies.
  • 6 participants
  • 37 minutes
dashboard
dashboards
version
advanced
introduced
monitoring
demo
v3
miniature
luminous
youtube image

24 May 2019

Ceph Operations at CERN: Where Do We Go From Here? - Dan van der Ster & Teo Mouratidis, CERN

This talk will present a top-down view of how Ceph is operated within the large-scale research environment of CERN. Scientists at CERN use Ceph in an increasing variety of ways, from block storage for OpenStack to HPC filesystems to S3 object storage. Operating this ~20PB of infrastructure requires continuous measurement and performance tuning to keep it running optimally. In this area, we will present our experience tuning and scaling RBD and CephFS, with the latter culminating in the first appearance of Ceph on the IO-500 list. On the operations side, we will present our approach to commissioning and decommissioning hardware, demonstrating some advanced features such as the Ceph balancer. We will conclude by presenting what is upcoming for storage in general at CERN, and different scenarios for how Ceph might play a role in that story.

About Teo Mouratidis
Storage Engineering Fellow, CERN
Teo works in DevOps on CERN's storage team. While operating Ceph at large scale, Teo has been contributing to Ceph development in the areas of data balancing and RBD.

About Dan van der Ster
CERN
Storage Engineer
Dan manages the Ceph storage service at CERN in Geneva, Switzerland. He has participated actively in the Ceph community for more than 5 years, and was one of the first to demonstrate Ceph's scalability up to multiple tens of petabytes. Dan has spoken at several Ceph Days and OpenStack Summits, acted as Academic Liaison to the original Ceph Advisory Board, and now has a similar role on the Ceph Governing Board. Dan earned a PhD in Computer Engineering at the University of Victoria, Canada in 2008.
  • 4 participants
  • 31 minutes
cern
demos
configuration
servers
researchers
proposed
stuff
conference
disk
cloud
youtube image

24 May 2019

Ceph Orchestrator: Bridging the Gap Between Ceph and Deployment - Sebastian Wagner, SUSE

The Ceph Manager Orchestrator module provides a unified view across different deployment tools, like Ceph-Ansible, DeepSea and Rook.

This presentation introduces the Ceph Manager Orchestrator module and highlights the benefits of using a single view for managing Ceph services.

I will also feature a demo of managing a Ceph cluster in a Kubernetes environment using the orchestrator module.

About Sebastian Wagner
SUSE
Senior Software Engineer
Sebastian Wagner is a Senior Software Engineer at SUSE, where he has been working on Ceph since 2016. He is the maintainer of the orchestrator module within the Ceph mgr. Sebastian received his master's degree in computer science in 2014 from the University of Applied Sciences Wedel. His previous speaking experience includes talks about Ceph at two Ceph Days and at Chemnitz Linux Days.
  • 5 participants
  • 41 minutes
orchestrators
orchestrator
management
maintainer
configuration
operated
gmbh
mode
integrated
sebastian
youtube image

24 May 2019

Ceph Practice And Usage In China Mobile - Zhang Shaowen, China Mobile (Suzhou) Software Technology Co., Ltd

China Mobile has used Ceph storage since 2016. Up to now, China Mobile has built hundreds of petabytes of block and object storage based on Ceph to support its own business, and it has also expanded into external markets, including finance, education, government, etc. A large number of projects have helped China Mobile accumulate a lot of experience with Ceph. This talk introduces use cases and best practices drawn from China Mobile's experience.

About Zhang Shaowen
Senior Engineer, China Mobile (Suzhou) software technology Co., Ltd
I've worked at China Mobile for 3 years and began working on Ceph in 2016. My work now focuses on object storage. I have a little speaking experience, but haven't yet had the chance to speak at a big conference.
  • 1 participant
  • 19 minutes
capacity
cloud
stored
services
utilizations
mobile
practices
matters
technology
caching
youtube image

24 May 2019

CephFS as a Scalable Filer - Rafael Lopez & Brett Milford, Monash University

Monash University's eResearch Centre adopted Ceph with the introduction of OpenStack over 6 years ago and quickly grew to love Ceph's design, stability, flexibility and community. Ceph now serves as the backbone not only for OpenStack storage, but also for general online file and object storage for researchers. This talk will dive into our Ceph journey, experiences and challenges, and in particular present how we are using CephFS as an all-purpose filer for our research community.

About Brett Milford
Research Devops Engineer, Monash University
Brett is an experienced cloud operator who has worked with OpenStack for several years. After moving from the University of Queensland to Monash, he has helped develop and innovate Monash's cloud environment.

About Rafael Lopez
Monash University
Research Devops Engineer
Rafael has developed and administered storage solutions for the past 6 years, working with various enterprise and open source technologies.
For the past couple of years he has been part of Monash University's eResearch centre, maintaining and developing the Ceph environment and other storage systems used by Researchers.
  • 2 participants
  • 40 minutes
monash
universities
campuses
researchers
australia
hi
overview
mana
centre
cfs
youtube image

24 May 2019

Configuring Ceph Deployments with an Easy to Use Calculator - Karl Vietmeier, Intel Corporation

When deploying a Ceph cluster, some of the most common questions include: How much RAM do I need? What is the recommended ratio of OSD storage size to the RocksDB size? Which CPU is the best? How many disks should I include per node? In this presentation, attendees will be shown, step by step, how to use an Excel template to properly answer the above questions and configure a Ceph cluster to meet end-user needs. Attendees will also be given the Excel template to use for their own deployments or in the field when working with end users.
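As a rough illustration of the kind of arithmetic such a calculator encodes, here is a back-of-the-envelope sizing sketch. The rules of thumb (about 4 GB of RAM per OSD plus a base allowance, and a RocksDB partition of roughly 1-4% of OSD capacity) are common community guidance, not figures from the talk; treat them as assumptions and check current upstream recommendations.

```python
# Back-of-the-envelope node sizing; all rules of thumb are assumptions.
def size_node(osds_per_node: int, osd_tb: float,
              db_pct: float = 0.02, ram_per_osd_gb: float = 4.0):
    ram_gb = 16 + osds_per_node * ram_per_osd_gb  # 16 GB base for OS/daemons
    db_gb_per_osd = osd_tb * 1000 * db_pct        # RocksDB/WAL partition size
    return {
        "ram_gb": ram_gb,
        "rocksdb_gb_per_osd": round(db_gb_per_osd),
        "raw_tb_per_node": osds_per_node * osd_tb,
    }

print(size_node(osds_per_node=12, osd_tb=8.0))
# {'ram_gb': 64.0, 'rocksdb_gb_per_osd': 160, 'raw_tb_per_node': 96.0}
```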

About Karl Vietmeier
Senior Solution Architect, Intel Corporation
I am a Cloud Architect at Intel with a focus on storage solutions. Talk to me about: NVMe, object storage, Linux.
  • 1 participant
  • 6 minutes
gigabyte
benchmarking
sizing
4k
intel
32k
configuration
space
nvm
servers
youtube image

24 May 2019

Configuring Small Ceph Clusters for Optimal Performance - Josh Salomon, Red Hat

The Ceph storage system is designed and architected for large clusters and huge capacities. Recently, we at Red Hat have seen the need to create smaller clusters for use as part of containerized environments (K8s/OpenShift). In this talk Josh will go over several aspects of Ceph configuration that are trickier for smaller clusters (such as balancing) and will explain how to catch inefficiencies and how to solve them.

About Josh Salomon
Red Hat
Senior Principal Software Engineer
Israel
I work at Red Hat on Ceph, and I have been in the storage industry for the last 5 years (at Dell/EMC ScaleIO and Red Hat).
I have more than 20 years of experience in application development, including vast experience in the development and architecture of enterprise-grade distributed applications.
  • 2 participants
  • 41 minutes
cluster
clusters
careful
small
important
capacity
tools
bottlenecks
optimize
logs
youtube image

24 May 2019

Day 2 Operations : Make Friends with Your Ceph Cluster - Adrien Gillard, Pictime Groupe

Setting a Ceph cluster up is now easier and easier with mature configuration management tools and the help of the community.

Still, Ceph is a complex system to manage and operate. Efforts are being made to reduce this complexity but in the meantime, administrators need to understand and apply the right configurations to avoid common caveats, and find the right tools to better understand what is going on with their clusters, in order to allow a smooth experience for their users.

This presentation focuses on the day-to-day operation of a Ceph cluster, and the tools and best practices needed to spend your weekends with friends and family rather than debugging your storage.

About Adrien Gillard
Pictime Groupe
Systems Engineer
Adrien is a systems engineer in the Office of the CTO of the French company Pictime Groupe, a managed service provider and hosting company specialised in retail and healthcare.

Adrien participated in building and operating the managed cloud platform of Pictime Groupe and deployed several Ceph Clusters to address secondary storage needs.

Besides his interest in Ceph and storage in general, he now also works on all things containers and continuous delivery.
  • 1 participant
  • 34 minutes
logs
logging
provisioning
cluster
centralize
enterprise
dashboards
servers
infrastructure
osd
youtube image

24 May 2019

Exploring the Performance Limits of CephFS in Nautilus - Manoj Pillai, Red Hat

Ceph, with its broad adoption and its integration with various platforms, is expected to handle a diverse set of workloads well. Storage workloads in the real world come in many different flavors: caching-friendly and not, data and metadata intensive, latency-sensitive and throughput-oriented, single and multi-threaded, mmap-based, aio-based.

In this talk, Manoj Pillai will present performance results and analysis for CephFS in the Nautilus release using a broad spectrum of tests covering the above cases. The evaluation will include workloads that distributed file systems generally have trouble with, and will provide results from comparable solutions where appropriate. The goal is to establish the current state of performance of CephFS, which should be useful to users, as well as to developers working to enhance CephFS performance.

About Manoj Pillai
Senior Principal Software Engineer, Red Hat
Manoj Pillai is part of the Performance and Scale Engineering Group at Red Hat. His focus is on storage performance, particularly distributed storage systems. He has presented his work at a number of conferences including Vault, Open Source Summit and FOSDEM.
  • 1 participant
  • 31 minutes
throughput
performance
bottlenecks
efficient
storage
nfs
improvements
gigabytes
scalability
server
youtube image

24 May 2019

Failing Better - When Not To Ceph and Lessons Learned - Lars Marowsky-Brée, SUSE

Talks on where and how to utilize Ceph successfully abound; and rightly so, since Ceph is a fascinating and very flexible SDS project. Let's talk about the rest.
What lessons can we learn from the problems we have encountered in the field, where Ceph may even have ultimately failed? Or where the behaviour of Ceph was counter-intuitive to user expectations? And if so, was Ceph suboptimal, or were the expectations off?
Drawing from several years of being the engineering escalation point for projects at SUSE and community feedback, this session will discuss anti-patterns of Ceph, hopefully improving our success rate by better understanding our failures.

About Lars Marowsky-Brée
SUSE
Distinguished Engineer
Berlin Area, Germany
Lars works as the architect for Ceph & software-defined-storage at SUSE. He is a SUSE Distinguished Engineer and represents SUSE on The Ceph Foundation board.
His speaking experience includes various Linux Foundation events, Ceph Days, OLS, linux.conf.au, Linux Kongress, SUSECon, and others. Previous notable projects include Linux HA and Pacemaker.
Lars holds a master of science degree from the University of Liverpool. He lives in Berlin.
  • 1 participant
  • 41 minutes
considerations
providing
clients
sustainable
capacity
trouble
ceph
centralization
project
self
youtube image

24 May 2019

Geographical Redundancy with rbd-mirror: Best Practices, Performance Recommendations, and Pitfalls - Florian Haas, City Network

rbd-mirror, introduced in the Jewel release, is a means of asynchronously replicating RADOS block device (RBD) content to a remote Ceph cluster. That's all fair and good, but how do I use it? How exactly does the rbd-mirror daemon work, what's the difference between one-way and two-way mirroring, what authentication considerations apply, and how do I deploy it in an automated fashion? How is mirroring related to RBD journaling, and how does that affect my RBD performance? And how do I integrate my mirrored devices into a cloud platform like OpenStack, so I can achieve true site-to-site redundancy and disaster recovery capability for persistent volumes?

This talk gives a run-down of the ins and outs of RBD mirroring, suggests best practices to deploy it, outlines performance considerations, and highlights pitfalls to avoid along the way.

Slides for this talk are at https://fghaas.github.io/cephalocon2019-rbdmirror/
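For readers who want to poke at the mechanics the talk covers, a minimal sketch using the python-rbd binding is below: it sets per-image mirroring mode on a pool, enables journaling on an image (the journal is what rbd-mirror replays remotely), and enables mirroring for that image. Pool and image names are placeholders, and peer bootstrap plus the rbd-mirror daemon on the remote cluster are omitted; treat it as an outline, not a complete setup.

```python
# Sketch: enable per-image RBD mirroring via the python-rbd binding.
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")                      # pool name: assumption

# per-image mirroring mode for the pool
rbd.RBD().mirror_mode_set(ioctx, rbd.RBD_MIRROR_MODE_IMAGE)

with rbd.Image(ioctx, "vol1") as image:                # image name: assumption
    # journaling is what rbd-mirror replays on the remote cluster
    image.update_features(rbd.RBD_FEATURE_JOURNALING, True)
    image.mirror_image_enable()

ioctx.close()
cluster.shutdown()
```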

About Florian Haas
City Network
VP Education
Austria
Website: https://fghaas.github.io/
Florian runs the Education business unit at City Network, and helps people learn to use, understand, and deploy complex technology. He has worked exclusively with open source software since about 2002, and has been heavily involved in OpenStack and Ceph since early 2012, and in Open edX since 2015. He co-founded hastexo, an independent professional services company, and served there as CEO and Principal Consultant until its acquisition by City Network in October 2017.
  • 1 participant
  • 38 minutes
mirroring
mirror
mirrored
mirrors
rvd
replications
hosts
troubleshooting
backup
journaled
youtube image

24 May 2019

Getting Started as a Rook-Ceph Developer - Blaine Gardner, SUSE

Kubernetes. We've all heard the name. It's the advanced container orchestration platform that lets you run your container applications at any scale. But can Kubernetes help make Ceph orchestration better? To answer that, let's have a look at Rook. Rook is the upstream project the Ceph community has been working with to make Ceph on Kubernetes a reality. We'll explore the benefits Rook gives Ceph, the challenges we have faced, the challenges we have yet to face, and our current vision for the future of Rook.

About Blaine Gardner
Ceph Containerized Storage Engineer, SUSE
I am a maintainer for the Rook upstream project's Ceph backend, and I work for SUSE as lead software engineer on the Enterprise Storage team's efforts to containerize Ceph and run it on Kubernetes. My go-to random fact is that beavers are nocturnal.
  • 3 participants
  • 34 minutes
kubernetes
docker
rook
pod
daemon
containerization
deployments
staff
nodes
workflows
youtube image

24 May 2019

Hands On with Rook: Ceph & Kubernetes - Maxime Guyot, Root Pi & John Studarus, Packet Host

This is a hands-on tutorial walking through the use of Ceph via Rook, a storage orchestration service for Kubernetes. Each attendee will be provided with a deployed bare-metal Kubernetes cluster and will walk through setting up Ceph via Rook across the bare-metal SSD resources, seeing how that storage is presented to Kubernetes clusters. We will then scale the underlying storage infrastructure up and down, as well as fail storage devices to showcase recovery.

About John Studarus
JHL Consulting
Cloud Architect
Greater San Diego Area
For the last twenty years, John has been providing technical management services building and evaluating complex distributed systems across the telecommunications, pharmaceutical, and financial services industries. He's a graduate of the University of California, San Diego and Carnegie Mellon University.

Recently John has been developing the software ecosystem to support applications running at tower based edge locations. This has revolved around testing and modifying cloud and container-based open source software, such as Kubernetes and OpenStack, to easily deploy and utilize the bare metal compute, network, and wireless infrastructure across these edges.

John runs a number of CNCF and Open Infrastructure meetup groups across Southern California, volunteers as an Ambassador for the OpenStack Foundation and serves on the Carnegie Mellon Information Networking Institute (INI) alumni board.

About Maxime Guyot
Root Pi
Cloud Consultant
Maxime is a cloud architect and engineer who is passionate about IT and open source technologies. He specializes in software-defined infrastructure using OpenStack, Kubernetes, and Ceph. He's a contributor to open source projects such as Kubespray and Service Catalog. For fun, he likes to build CI systems across as many public clouds as he can get accounts on.
  • 5 participants
  • 1:21 hours
workshop
provisioning
dos
prepares
labs
instructions
terminal
debugging
home
rook
youtube image

24 May 2019

Healthier Ceph Clusters with Ceph-medic - Alfredo Deza, Red Hat

ceph-medic is a small project that helps identify issues with a Ceph cluster that may be difficult to detect, even when using automation. The project lead will demonstrate how one can quickly discover issues with a running cluster, regardless of the deployment type (bare metal, containerized, or Kubernetes), emphasizing the importance of good reporting and of clarifying errors or warnings for better deployments.

About Alfredo Deza
Red Hat
Principal Software Engineer
Alfredo Deza is a principal software engineer working for Red Hat on the Ceph distributed storage system, an avid open source developer, unit test enthusiast, Vim plugin author, photographer and former athlete. As a passionate, knowledge-craving developer he can be found giving presentations to local groups about Python, file systems, storage, and system administration. He currently leads the development of ceph-volume and ceph-medic, and the build and release infrastructure for Ceph.
  • 1 participant
  • 5 minutes
troubleshooting
deployment
error
cluster
help
tricky
osd
server
docker
qa
youtube image

24 May 2019

Highly Available Git on CephFS with Rook, Kubernetes, and OpenStack - James E. Blair, Red Hat

The OpenDev Project operates infrastructure for some of the largest and most active Open Source projects. It needs a bulletproof system for serving the git repositories for those projects, and it needs to be entirely open source.

This presentation will show how OpenDev uses Kubernetes and Rook to deploy an entirely virtualized Ceph cluster and CephFS to serve git repositories. The cluster is fully integrated with the OpenStack cloud provider it runs in, so that OpenStack automatically provides load balancing and the virtualized block storage that supports the Ceph cluster. The deployment process is automated with Ansible and allows for easy experimentation and testing, since the entire system can be recreated in a matter of minutes.

About James E. Blair
Red Hat
Principal Software Developer
James works in the office of the CTO at Red Hat, is a founding member of the OpenStack project infrastructure team and the project lead for the project gating system Zuul. As a sysadmin and hacker he gets to write elegant code and then try to make it work in the real world. He has been active in free software for quite some time, and has previously worked for UC Berkeley and the Free Software Foundation.
  • 3 participants
  • 42 minutes
openstack
openstax
dev
open
accessible
developers
vexos
hosted
software
github
youtube image

24 May 2019

I Need More Space, It's Not You It's BlueStore - Mohamad Gebai, SUSE

Users are sometimes confused when it comes to the used space reported by Ceph. This talk will help users understand the difference between expected, perceived and actually used space for an RBD on BlueStore use case. The examples shown in this talk are inspired by frequently asked questions on the Ceph mailing list by new users.
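The arithmetic behind much of the confusion can be shown in a few lines. This toy calculation (ours, not from the talk) distinguishes the size the guest sees, the data actually written, and the raw space the cluster consumes once replication is counted; the numbers are illustrative assumptions, and EC pools change the multiplier.

```python
# Toy arithmetic: why a thin-provisioned "1 TB" RBD image can show up
# as 3 TB of raw usage on a replicated pool.
TB = 1000 ** 4                 # 10^12 bytes

image_size = 1.0 * TB          # what the VM sees (thin-provisioned)
written    = 0.4 * TB          # what the guest has actually written
replicas   = 3                 # replicated pool size (assumption)

stored   = written             # logical data, the "STORED"-style figure
raw_used = written * replicas  # what the cluster consumes on disk

print(f"guest sees {image_size / TB:.1f} TB, "
      f"stored {stored / TB:.1f} TB, "
      f"raw used {raw_used / TB:.1f} TB")
```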

About Mohamad Gebai
SUSE
Software Engineer
Montreal
Websitelinuxmogeb.blogspot.com
I am a software engineer at SUSE, working on the SUSE Enterprise Storage product which is based on Ceph. I was previously part of the Azure Storage team at Microsoft.

My main area of focus is performance. I have a background in tracing and monitoring tools on Linux, in both kernel and user space.
  • 1 participant
  • 6 minutes
capacity
space
allocation
storage
terabyte
disk
provisioning
fdf
data
vms
youtube image

24 May 2019

Juggling Petabytes: Managing Ceph at Scale with Ceph-ansible - Matthew Vernon, Wellcome Sanger Institute

The Wellcome Sanger Institute has 18PB in its largest Ceph cluster. This talk will explain how the Sanger used Ceph to build and scale a reliable platform for scientific workflows, and enable secure data sharing via S3. And how they got 100GB/s read performance out of their cluster.

Matthew will outline the interesting aspects of the Sanger's Ceph setup, including how the team grew it from a small initial installation, automated deployment management and monitoring, and some of the issues they have encountered along the way. Matthew will also explore some of the good (and less good!) aspects of running Ceph at scale.

About Matthew Vernon
Wellcome Sanger Institute
Principal System Administrator
Matthew Vernon is a Principal System Administrator at the Wellcome Sanger Institute, and a member of the HPC team. As well as traditional HPC farms, the team supports an OpenStack platform, and 3 Ceph clusters, the largest of which has 18PB of raw capacity. Matthew's current work is largely around the management of the Ceph clusters. Matthew has been a Debian developer since 1999, and has a PhD in "spatial spread of farm animal diseases"; he has spoken at a number of scientific and technical conferences in Europe and the USA.
  • 1 participant
  • 40 minutes
cern
institute
scientists
lsf
project
cluster
specs
warned
advance
dossing
youtube image

24 May 2019

Keeping up a Competitive Ceph/RadosGW S3 API - Javier Muñoz, Igalia

RadosGW S3 is the service layer compatible with the Amazon Simple Storage Service API (Amazon S3) in Ceph. Some users and companies adopt Ceph and use this service layer to build digital products/services that compete with other services, APIs and technologies in the object storage market.

This talk shares the experience of contributing new features and bugfixes upstream in RadosGW that were developed through open projects in the community.

The talk reviews some of the contributions made by the author from Jewel to Nautilus, and their impact from a product/service point of view for the different parties.
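The point of the compatibility layer is that standard S3 tooling works unchanged against RGW. A hedged example with boto3 is below; the endpoint (RGW's default civetweb port is 7480) and credentials are placeholders.

```python
# Exercising RadosGW's S3-compatible API with a stock S3 client.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:7480",  # placeholder RGW endpoint
    aws_access_key_id="ACCESS_KEY",              # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

s3.create_bucket(Bucket="demo-bucket")
s3.put_object(Bucket="demo-bucket", Key="hello.txt", Body=b"hello rgw")
print(s3.get_object(Bucket="demo-bucket", Key="hello.txt")["Body"].read())
```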

About Javier Muñoz
Igalia
Software developer
Spain
Websiteblogs.igalia.com/jmunhoz
Javier works as a Computer Engineer and Software Developer at Igalia, an open source consultancy specialized in the development of innovative projects and solutions. He joined the Ceph community in 2015.

Javier is part of the 'Cloud & Virtualization' team in Igalia where he develops new functionalities and solves bugs upstream in Open Source projects such as Ceph, Apache Libcloud, Ansible, etc.
  • 1 participant
  • 26 minutes
ai
amazon
italy
parti
considered
functionality
services
saleem
complexi
goods
youtube image

24 May 2019

Keynote: A Wonderful Journey On Ceph - Luo Kexue, Senior Software Engineer, ZTE Corporation

As a participant in the Ceph community, ZTE has been actively contributing to the Ceph open source community and has built CloveStorage, a distributed storage solution based on Ceph. In this talk, ZTE will share their wonderful journey with Ceph, including challenges and solutions.

About Luo Kexue
Senior Software Engineer, ZTE Corporation
Luo Kexue is currently working for ZTE Corporation as a senior software engineer and R&D team leader for storage. He has been working on Ceph for over 5 years and now focuses on distributed storage solutions.
  • 1 participant
  • 8 minutes
city
zte
company
community
small
management
developers
problems
started
luo
youtube image

24 May 2019

Keynote: Ceph Journey - A Perspective - Dr. Gerald Pfeifer, Chief Technology Officer, SUSE

Ceph is the number one Open Source Software-defined Storage solution for scale-out applications. But you already knew that and it’s the reason why you’re at Cephalocon!

As a key contributor to the Ceph project, SUSE will share our experiences and customer feedback on how we make it easier to consume and more accessible for enterprise use cases. We will explain why being part of the community is crucial to our journey and discuss key milestones along the way.

We will leave the audience with a sneak peek at future focus areas on the path to a fully self-managed, easy to use, global, cloud-native storage platform.

About Dr. Gerald Pfeifer
SUSE
Chief Technology Officer
As CTO Dr. Gerald Pfeifer leverages his deep understanding of infrastructure software, Open Source ecosystems, and respective business and technology aspects to help articulate, drive, and promote SUSE's technology vision, engaging with customers, partners, and Open Source communities all along the way.

VP Products & Technology Programs until 2019, Dr. Pfeifer drove the transformation of the SUSE portfolio from the world's first Enterprise Linux distribution to software-defined infrastructure including OpenStack cloud, Ceph storage, and networking solutions plus container-based application delivery around Kubernetes and Cloud Foundry. He also led partner-facing engineering teams, and in the early days served as project lead for Enterprise Linux and helped create SUSE's first offering for developers in 2004.

He has a long history in infrastructure and Open Source software and still contributes to key projects such as the GNU Compiler Collection and Wine. Before joining SUSE in 2003 he was Senior Researcher at C.I.E.S./University of Calabria, Italy, and Assistant Professor at Vienna University of Technology, Austria, where he received his doctorate and equally enjoyed research and teaching.
  • 1 participant
  • 17 minutes
safe
caution
2019
ahead
important
intelligent
sage
openstack
introduced
container
youtube image

24 May 2019

Keynote: Ceph as Part of the Data Infrastructure for Zoned Storage - Jorge Campello De Souza, Sr. Director, System and Software Technologies, Western Digital

Zoned block devices are a recently introduced category of storage devices that address the needs of large-scale data infrastructures. In this presentation we will describe zoned block devices and how Ceph is well positioned to be part of the data infrastructure that takes advantage of these new technologies.

About Jorge Campello De Souza
Western Digital
Sr Director, System and Software Technologies
Jorge Campello is a Senior Director of Systems and Software Technologies at Western Digital Research. He holds a PhD degree in Electrical Engineering from Stanford University. He has 20 years of experience in the Data Infrastructure industry. His interests include distributed storage, open source, emerging NVM technology, artificial intelligence, information theory and security.
  • 1 participant
  • 10 minutes
storage
smr
devices
disk
dram
ssds
capacity
infrastructure
data
saif
youtube image

24 May 2019

Keynote: Pushing the Limits of Ceph Performance through Software and Hardware Innovations - Tushar Gohad, Principal Engineer, Intel Corporation

In the 2018 Ceph survey (https://ceph.com/wp-content/uploads/2018/07/Ceph-User-Survey-2018-Slides.pdf slide #56), the number one request to the question “Where should the Ceph community focus its efforts?” was “Performance”. In response, Intel is leading a community effort in driving a series of innovations in the upstream codebase as well as hardware technologies that can provide Ceph users with more IOPS, lower tail latencies, and lower-cost all-flash capacity storage. In this session, Intel Principal Engineer Tushar Gohad will discuss the upstream code contributions that remove performance bottlenecks and enable new use cases. Additionally, Tushar will discuss recent Intel hardware technologies that provide better performance and better value solutions than previous generations.

About Tushar Gohad
Intel Data Center Group
Principal Engineer
United States
Tushar is a Principal Engineer and Software Architect with Intel's Data Center Group. He has had a long career working on open-source networking and storage-related technologies. His recent contributions have been to Ceph, the Storage Performance Development Kit (SPDK) and networking in the Linux kernel.
  • 1 participant
  • 12 minutes
storage
capacity
intel
deployments
ssds
architectures
containerization
appliances
self
sif
youtube image

24 May 2019

Keynote: State of the Cephalopod - Sage Weil, Co-Creator, Chief Architect & Ceph Project Leader, Red Hat

A welcome to Cephalocon Barcelona, and an update from the Ceph project leader on recent developments, current priorities, and other activity in the Ceph community.

About Sage Weil
Red Hat
Ceph Project Leader
Madison, WI, USA
Sage helped build the initial prototype of Ceph at the University of California, Santa Cruz as part of his graduate thesis. Since then he has led the open source project with the goal of bringing a reliable, robust, scalable, and high performance storage system to the free software community.
  • 1 participant
  • 36 minutes
ceph
cephalic
cern
barcelona
thank
sponsors
conference
stuff
people
staff
youtube image

24 May 2019

Keynote: Supporting Swiss Academia with Ceph & OpenStack - Jens-Christian Fischer, Team Lead, Infrastructure & Data, SWITCH

SWITCH is the Swiss National Research and Education Network (NREN) and has been operating a Ceph & OpenStack based IaaS called SWITCHengines for higher education in Switzerland since 2014. In this talk, we describe our setup, the use cases we support, and our experience running multiple multi-petabyte Ceph clusters in production. For Science!

About Jens-Christian Fischer
SWITCH
Team Lead, Infrastructure and Data
Jens-Christian is the team lead of the "Infrastructure and Data" team at SWITCH that is responsible for developing, building and operating SWITCHengines. He has a background in software development and agile project management. He holds an MSc in IT from the University of Liverpool.
  • 1 participant
  • 23 minutes
enron
researchers
switch
cern
brain
science
future
sef
extremely
presentation
youtube image

24 May 2019

Keynote: The System that Matters - Tim Massey, Chief Executive Officer & Phil Straw, Chief Technology Officer, SoftIron

About Phil Straw
CTO, SoftIron
Phil Straw is the CTO of SoftIron, the Silicon Valley company behind HyperDrive® – the dedicated Ceph appliance, purpose-built for software-defined storage. Previously he has held senior technical roles with Security, Delphi Electronics, 3Com and Cisco.

About Tim Massey
CEO, SoftIron
Globally responsible for all business functions of SoftIron. Previously General Manager at Leadis, founder and CEO at Mondowave, and Principal at Band of Angels Fund L.P.
  • 2 participants
  • 16 minutes
capacity
sefa
appliances
iron
software
technologist
innovating
silicon
soft
open
youtube image

24 May 2019

Keynote: Town Hall - Panel

This will be a town hall panel with the Ceph Component leads. Please submit questions ahead of time to the etherpad, or ask them during the session. https://pad.ceph.com/p/cephalocon-2019-town-hall.
  • 8 participants
  • 35 minutes
editor
programmer
developers
software
tools
introduce
terminal
ide
screen
fest
youtube image

24 May 2019

Keynote: What's Planned for Ceph Octopus - Sage Weil, Co-Creator, Chief Architect & Ceph Project Leader, Red Hat

About Sage Weil
Red Hat
Ceph Project Leader
Madison, WI, USA
Sage helped build the initial prototype of Ceph at the University of California, Santa Cruz as part of his graduate thesis. Since then he has led the open source project with the goal of bringing a reliable, robust, scalable, and high performance storage system to the free software community.
  • 1 participant
  • 18 minutes
octopus
prioritization
stuff
kubernetes
provisioning
workflows
addressing
upgrades
cluster
io
youtube image

24 May 2019

Keynote: Supermicro® SuperStorage Systems Based on New Intel® Xeon® Scalable Processors with NVMe and Intel® Optane™ DC Persistent Memory - David Ramirez, Field Application Engineer, SuperMicro Computer Inc.

Supermicro offers the industry's widest selection of server hardware to ensure cloud providers have control of their environment using highly flexible hardware solutions suited to specific requirements. With resource-saving top of mind, the hardware is designed to reduce CAPEX, TCO and TCE - Total Cost to the Environment - with no compromise on performance.

About David Ramirez
SuperMicro Computer Inc.
Field Application Engineer, Spain & Portugal
David Ramirez is a Field Application Engineer at SuperMicro Computer Inc., focused on global solutions and hardware technology, especially software-defined storage and low-latency storage/network solutions. Recently, for the second consecutive year, David was a speaker at the Spanish national Ceph meeting.
  • 1 participant
  • 9 minutes
gigabyte
cpus
disks
hardware
storage
nvme
ndc
servers
ram
speed
youtube image

24 May 2019

Learn Ceph — For Fun, For Real, For Free! - Florian Haas, City Network

Since early 2018, City Cloud Academy has offered an entirely self-paced Ceph Distributed Storage Fundamentals course with fully interactive labs at no cost to 25 community members each on a first-come, first-served basis. We're making it easy to get your first start on Ceph, and we're looking for feedback on how to get better!

About Florian Haas
City Network
VP Education
Austria
Website: https://fghaas.github.io/
Florian runs the Education business unit at City Network, and helps people learn to use, understand, and deploy complex technology. He has worked exclusively with open source software since about 2002, and has been heavily involved in OpenStack and Ceph since early 2012, and in Open edX since 2015. He co-founded hastexo, an independent professional services company, and served there as CEO and Principal Consultant until its acquisition by City Network in October 2017.
  • 1 participant
  • 6 minutes
conference
discussion
lecture
instructor
talking
consultant
advanced
seth
immersive
admittedly
youtube image

24 May 2019

Making Ceph Fast in the Face of Failure - Neha Ojha, Red Hat

Ceph has made a lot of improvements to reduce the impact of recovery and background activities on client I/O. In this talk, we'll discuss the key features that affect this, and how Ceph users can take advantage of them.

About Neha Ojha
Senior Software Engineer, Red Hat
Neha is a Senior Software Engineer at Red Hat. She is the project technical lead for the core team focusing on RADOS. Neha holds a Master's degree in Computer Science from the University of California, Santa Cruz.
  • 3 participants
  • 35 minutes
recovery
failures
log
process
improving
throughput
important
intervene
adaptive
osd
youtube image

24 May 2019

MeerKAT Astronomy Data Store Deployment and Operations - Martin Slabber, SARAO

MeerKAT, inaugurated on the 13th of July 2018 under the auspices of SARAO, is a radio telescope consisting of 64 antennas, built in the Northern Cape, South Africa.

In this talk, Martin will describe the hardware, software stack, deployment and operation tools used on the MeerKAT data archive.

MeerKAT currently has two Ceph clusters, one on-site at the telescope in the arid Karoo and the second larger cluster in Cape Town.

The Cape Town cluster consists of 2640 hard drives and 55 NVMe devices. The Karoo cluster consists of 240 hard drives and 480 solid state disks.

The clusters were built by a small team of engineers and as much as possible of the deployment and operation has been automated.

About Martin Slabber
Engineer/DevOps, SARAO
Martin Slabber is an electronics engineer turned software engineer turned DevOps for the Science Data Processing team at SARAO. SARAO recently completed the construction of MeerKAT, the largest radio telescope in the world.
  • 1 participant
  • 42 minutes
astronomy
astronomers
astronomer
telescope
astronomical
galaxy
science
stars
advanced
cern
youtube image

24 May 2019

Messenger V2: The New Ceph Wire Protocol - Ricardo Dias, SUSE Linux

In this talk we will present an overview of the current design and implementation of the new wire protocol (the protocol used for communication between pairs of daemons, and between daemons and clients), named Messenger V2, which aims to overcome the limitations of the current protocol, for example by making it possible to encrypt all data transferred on the wire.

We will start by describing the current wire protocol design and pinpoint its limitations, which will pave the way to presenting the new protocol's features. Then we will present the details of the new protocol design, its new features, and how it deals with clients from older Ceph versions. We will also talk briefly about some possible future features that can be implemented by extending the base protocol.
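One user-visible change worth knowing: msgr2 monitors listen on port 3300 by default, alongside the legacy v1 port 6789. The sketch below only tests TCP reachability of the two ports, not the protocol handshake itself; the address is a placeholder.

```python
# Trivial reachability check for a monitor's msgr2 (v2) and msgr1 (v1)
# ports; this does not speak either protocol.
import socket

MON_HOST = "10.0.0.1"   # placeholder monitor address

for name, port in (("msgr2 (v2)", 3300), ("msgr1 (v1)", 6789)):
    try:
        with socket.create_connection((MON_HOST, port), timeout=2):
            print(f"{name}: port {port} open")
    except OSError:
        print(f"{name}: port {port} unreachable")
```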

About Ricardo Dias
SUSE Linux
Senior Software Engineer
Ricardo Dias is currently working as a senior software engineer at SUSE Linux, in the Enterprise Storage Team, where his main task is to contribute to the upstream Ceph storage system project. He is also an Integrated Member at the NOVA LINCS laboratory, where he still collaborates in several research projects and co-supervises a PhD student with Prof. João Lourenço.

He received his doctoral degree from the Universidade Nova de Lisboa, Portugal, in 2013, under the supervision of Prof. João Lourenço, on the topic of Transactional Memory.
During his research career, he has published, and presented, several research papers in high ranked scientific conferences.
  • 2 participants
  • 37 minutes
protocol
message
messenger
communication
wire
security
sef
v2
version
firewall
youtube image

24 May 2019

Monitoring Ceph with Prometheus - Jan Fajerski, SUSE

Monitoring a clustered storage solution like Ceph is essential for the sanity of everyone involved. Prometheus offers scalable monitoring and alerting for highly dimensional time series data. After short introductions to both systems, this talk will cover configuration, deployment and basic usage briefly. The remainder of the session will dive into more interesting topics like alerting, the brand new rbd client metrics, and correlating metrics from multiple exporters. The latter enables the creation of effective dashboards, like an OSD dashboard that includes SMART stats for a given OSD instance.
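For orientation, the Ceph side of such a setup is the mgr's Prometheus exporter, which serves text-format metrics over HTTP (port 9283 by default, once the module is enabled with `ceph mgr module enable prometheus`). A minimal sketch of reading it directly, with a placeholder host name:

```python
# Read Ceph metrics straight from the mgr's Prometheus exporter.
import urllib.request

URL = "http://ceph-mgr.example.com:9283/metrics"   # placeholder host

with urllib.request.urlopen(URL, timeout=5) as resp:
    for raw in resp:
        line = raw.decode()
        if line.startswith("ceph_osd_up"):         # one metric family of many
            print(line.strip())
```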

About Jan Fajerski
SUSE
Senior Software Engineer
I have the good fortune to work for SUSE's Enterprise Storage team, which allows me to contribute to Ceph on a regular basis. I have previously spoken about Ceph and projects related to its ecosystem at several conferences and user group meetings, like FOSDEM, Ceph Day Germany and local meetup events.
  • 1 participant
  • 33 minutes
prometheus
monitoring
performance
considerations
updates
instrumentation
prompt
dashboards
trends
endpoints
youtube image

24 May 2019

Object Bucket Provisioning in Rook-Ceph - Jonathan Cope & Jeff Vance, Red Hat

While Kubernetes internally supports a generalized API for managing file and block storage, S3 object storage is fundamentally lacking. Rook, a cloud native storage orchestrator, has brought several S3 object storage providers into the Kubernetes ecosystem, including Ceph-Object. What Rook lacks is a generalized Kubernetes S3 API for bucket provisioning. We are designing and implementing such an operator for Rook. This operator provides a generalized S3 bucket provisioning API for Kubernetes users via a set of Custom Resource Definitions. Through these CRDs, Ceph-Object consumers can utilize Kubernetes to provision and manage their Ceph-Object buckets. This presentation focuses on the design goals and use-cases for native Rook bucket provisioning, and some bucket CRD implementation details.
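A hedged sketch of what consuming such an API could look like from the Kubernetes Python client is below. The ObjectBucketClaim group/version and storage class name follow the lib-bucket-provisioner design the talk describes, but treat every name as an assumption to check against your Rook version.

```python
# Create an ObjectBucketClaim so Rook-Ceph provisions an S3 bucket.
# CRD group/version, plural, and storage class name are assumptions.
from kubernetes import client, config

config.load_kube_config()

obc = {
    "apiVersion": "objectbucket.io/v1alpha1",
    "kind": "ObjectBucketClaim",
    "metadata": {"name": "demo-claim"},
    "spec": {
        "generateBucketName": "demo",            # provisioner appends a suffix
        "storageClassName": "rook-ceph-bucket",  # assumed class name
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="objectbucket.io", version="v1alpha1",
    namespace="default", plural="objectbucketclaims", body=obc,
)
```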

About Jonathan Cope
Senior Developer, Red Hat Inc.
Jon has been working at Red Hat for 5 years and lives and works in Austin, TX. He is a senior developer with a long-time focus on Kubernetes storage and recently on Rook-Ceph. His current project is designing and implementing object bucket provisioning.

About Jeff Vance
senior developer, Red Hat
Jeff has been working at Red Hat for 7 years and lives and works in Santa Cruz, CA. He is a senior developer focusing on Kubernetes and OpenShift storage. He is currently involved with various object stores and incorporating them into both Kubernetes and Rook-Ceph.
  • 5 participants
  • 28 minutes
kubernetes
provisioners
demoing
implementations
rook
hands
presented
knowledge
stuff
handles
youtube image

24 May 2019

Object WORM Feature in Ceph Rados Gateway - Zhang Shaowen, China Mobile (Suzhou) Software Technology Co., Ltd

"Write Once Read Many"(WORM) model prevents users' data from being deleted or overwritten to meet regulatory requirements. This is very suitable for financial, insurance, online collaboration and other fields. Amazon, Google and Alibaba all support WORM feature in their object storage products which greatly expands the scope of use of object storage. This proposal introduces the WORM feature in Ceph radosgw gateway developed by China Mobile based on Amazon S3. It will introduce the design and usage for object WORM feature and is hoped to make some help for audience to understand why and how to use this new feature.

About Zhang Shaowen
Senior Engineer, China Mobile (Suzhou) software technology Co., Ltd
I've worked at China Mobile for 3 years and began working on Ceph in 2016. My work now focuses on object storage. I have a little speaking experience, but haven't yet had the chance to speak at a big conference.
  • 1 participant
  • 14 minutes
warm
stored
storage
tempered
readings
gentle
logical
technology
captivity
protection
youtube image

24 May 2019

Optimize librbd for Lower CPU Cost and Higher Scalability - Li Wang, DiDi

This talk will introduce our work to reduce the CPU overhead of the qemu+librbd stack, which includes using rbd_aio_writev instead of rbd_aio_write in the QEMU rbd driver, and further optimizing rbd_aio_writev to send data with zero copy. These optimizations lead to 48% less CPU cost, 46% lower latency, and 85% higher IOPS for 1M sequential writes. In addition, we improve the scalability of librbd by using multiple writeback threads, and reduce the granularity of the rbd cache lock.
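For orientation, the asynchronous write path being optimized looks roughly like this from librbd's Python binding. The scatter-gather call (rbd_aio_writev) and the zero-copy work discussed in the talk are C-level changes with no direct Python equivalent, so this only illustrates the aio completion style; pool and image names are placeholders.

```python
# Illustration of librbd's aio write/completion style (Python binding).
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("rbd")                 # pool name: assumption

with rbd.Image(ioctx, "vol1") as image:           # image name: assumption
    results = []

    def on_complete(completion):                  # fires once the OSDs ack
        results.append(completion.get_return_value())

    comp = image.aio_write(b"x" * (1 << 20), 0, on_complete)  # 1 MiB write
    comp.wait_for_complete_and_cb()
    print("aio write returned", results)

ioctx.close()
cluster.shutdown()
```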

About Li Wang
Senior Technical Expert, DiDi
Li Wang is a senior technical expert at DiDi.
  • 1 participant
  • 26 minutes
mbd
bt
ba
bb
ssb
operating
throughput
liberty
client
pd
youtube image

24 May 2019

Optimizing Ceph Object Storage for Production in Multisite Clouds - Michael Hackett & Vikhyat Umrao, Red Hat

Today, more than 60% of data storage involves unstructured data—including images, video, audio, and other types of data. The Ceph object gateway provides an object storage solution with S3 and Swift APIs that is ideal for storing unstructured data in multisite and hybrid cloud storage scenarios, scaling up to petabytes and beyond.
In this presentation we will look at sizing a Ceph Object Gateway cluster, identifying performance requirements, identifying suitable hardware, and configuring storage policies with varying performance characteristics. We will also look at Ceph features such as erasure coding, compression, bucket index sharding and integration with OpenStack.
Failover and recovery options will also be touched upon, with discussion of RGW multisite deployments.

About Vikhyat Umrao
Principal Software Engineer, Red Hat
Vikhyat Umrao works for Red Hat as a principal software maintenance engineer on the Ceph Support team. He has co-authored the Ceph Cookbook and has presented at Red Hat Summit, Cephalocon and OpenStack conferences.

About Michael Hackett
Red Hat
Principal Software Engineer
Westford, MA
Michael Hackett is a storage and SAN expert in customer support. He has been working on Ceph and storage-related products for over 13 years. He co-authored the Ceph Cookbook and holds several storage and SAN-based certifications. Michael has presented at several Red Hat Summits and at last year's Cephalocon.
Michael is currently working at Red Hat, based in Massachusetts, where he is a principal software maintenance engineer for Red Hat Ceph Storage and the technical product lead for the global Ceph team.
  • 5 participants
  • 44 minutes
storage
stored
backups
capacity
servers
ram
stack
ssd
applications
openstack
youtube image

24 May 2019

Per OSD Recovery Bandwidth Control Based on dmClock - Xie Xingguo & Yan Jun, ZTE Corporation

We present a prototype of a bandwidth control strategy for recovery activities at per-OSD granularity, built on dmClock, an algorithm that implements distributed quality of service. The benefit is that we are now able to limit the bandwidth of recovery activities in any form, thereby greatly reducing their impact on existing client-side services.
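
dmClock itself balances reservations, weights, and limits across clients; as a far simpler illustration of the core idea of capping recovery bandwidth, here is a token-bucket throttle sketch (this is not dmClock and not Ceph code).

    # Token-bucket throttle: a simplified stand-in for per-OSD recovery QoS.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_per_sec, burst_bytes):
            self.rate = rate_bytes_per_sec
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def throttle(self, nbytes):
            """Block until nbytes of recovery traffic may be sent."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

    # Cap recovery at 50 MB/s with a 4 MiB burst, then account for one 1 MiB op.
    bucket = TokenBucket(50 * 1024 * 1024, 4 * 1024 * 1024)
    bucket.throttle(1024 * 1024)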

About Yan Jun
Principal Software Engineer, ZTE Corporation
Experienced Ceph developer, Ceph member, and distributed QoS expert.

About Xie Xingguo
Principal Software Engineer, ZTE Corporation
Experienced Ceph developer and Ceph Leadership Team member (http://docs.ceph.com/docs/master/governance/#ceph-leadership-team).
  • 1 participant
  • 20 minutes
performance
recolor
reliability
processed
recovering
problem
degraded
method
adaptive
latency
youtube image

24 May 2019

Practices of Ceph Object Storage in Public Cloud Services - Yu Liyang, China Mobile

We talk about China Mobile's practices with Ceph object storage: the multi-site datacenter architecture across 3 cities, the billing system, new features (bucket notification, request callback, object soft links), and the difficulties we encountered.

About Yu Liyang
Software Engineer, China Mobile
Yu Liyang is an object storage software engineer who has worked for China Mobile since 2016.
  • 1 participant
  • 6 minutes
cloud
capacity
servers
service
package
improved
users
meta
chana
data
youtube image

24 May 2019

Practices of Using NFS Over RGW - Enming Zhang, UMCloud

About Enming Zhang
Software Engineer, UMCloud
UMCloud software engineer and Ceph contributor, mainly engaged in storage product research and development at UMCloud. He has focused on Ceph RGW development since 2016.
  • 1 participant
  • 6 minutes
rgw
obg
problems
storage
capacity
deleted
nfs
processing
gig
taiga
youtube image

24 May 2019

RADOS Object Class Development in C++ and Lua - Noah Watkins, Red Hat

In addition to the standard file, block, and RGW interfaces, Ceph exposes a powerful low-level interface to RADOS objects. This talk will focus on one lesser utilized aspect of this interface called object-classes. Object classes in Ceph allow for the creation of application-specific object interfaces whose implementations execute within the storage system itself. This provides developers with a powerful tool for the construction of transactional interfaces that can utilize remote CPU, memory, and I/O resources within each OSD. This talk will explore how these custom interfaces are generally used, common design patterns, and how developers can get started developing with object classes using C++ and Lua.
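
As a taste of the interface, here is a minimal sketch of calling an object class from the Python librados binding, using the in-tree example "hello" class; it assumes that class is available on the OSDs, and the pool and object names are placeholders.

    # Invoke an OSD-side object class method from a client.
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("testpool")  # pool name is an assumption

    ioctx.write_full("greeting", b"")  # read-class methods need an existing object
    # Dispatch to the method registered by the example cls_hello plugin.
    ret, out = ioctx.execute("greeting", "hello", "say_hello", b"Cephalocon")
    print(out.decode())  # expected along the lines of "Hello, Cephalocon!"

    ioctx.close()
    cluster.shutdown()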

About Noah Watkins
Software engineer, Red Hat
Noah has been working on Ceph at Red Hat since he graduated from UC Santa Cruz in June 2018. Noah has presented academic work on storage systems at several conferences and workshops. At Red Hat, Noah's work is focused on orchestration.
  • 3 participants
  • 30 minutes
implemented
server
interfaces
discussed
handling
providing
rgw
lib
chef
ray
youtube image

24 May 2019

RGW S3: Feature Progress, Limitations & Testing - Robin H. Johnson, DigitalOcean (Spaces) & Ali Maredia, Red Hat

What’s new in the world of RGW S3 features & their parity in relation to other S3 providers?
What are the performance costs of S3 features (garbage collection & Bucket Lifecycle)?

This session will cover: development and roadmap in testing S3 compatibility; specification coverage; feature performance (and the costs to reach that performance); operational behavior; and war stories from testing S3.

As future work, what does large-scale compatibility in the global S3 ecosystem look like? (and how to test it: s3-tests and beyond)

How does deliberate divergence from the S3 specification provide new functionality? (Such as RGW PubSub instead of S3 Bucket Notifications)
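
For a flavor of what such testing looks like, here is a sketch of the kind of round-trip check a compatibility suite like s3-tests performs against an RGW endpoint; the endpoint and credentials are placeholders.

    # A round-trip and lifecycle check against an S3-compatible RGW endpoint.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:8080",  # hypothetical endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="compat-test")
    s3.put_object(Bucket="compat-test", Key="k", Body=b"v")
    assert s3.get_object(Bucket="compat-test", Key="k")["Body"].read() == b"v"

    # Bucket Lifecycle is one of the features whose cost the session examines.
    s3.put_bucket_lifecycle_configuration(
        Bucket="compat-test",
        LifecycleConfiguration={
            "Rules": [{"ID": "expire", "Status": "Enabled",
                       "Filter": {"Prefix": ""}, "Expiration": {"Days": 1}}]
        },
    )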

About Robin Johnson
DigitalOcean
Senior Engineer, Spaces
Robin presently improves Ceph to fit operational needs for the DigitalOcean public cloud environment, as part of the Spaces product. After many years of focus on Gentoo Linux, Robin explored Ceph after a non-profit deployment revealed deficiencies & problems. While developing solutions, Robin improved other aspects of RGW. This included implementing the S3 Website API & constantly chasing bugs in S3 client implementations.

About Ali Maredia
Red Hat
Software Engineer
Ali Maredia works on Ceph for Red Hat with a focus on object storage. Ali maintains Ceph's S3-tests repository, which is responsible for testing Ceph’s S3 interface. Ali has done work all over Ceph, including in RGW, ceph-ansible, and the testing infrastructure. This spring Ali is mentoring a group of graduate students from Boston University on a project to implement object caching in the RGW. Ali also coordinates Ceph's Google Summer of Code and Outreachy program. Ali got involved with Ceph while working for CohortFS.
  • 7 participants
  • 33 minutes
ws3
s3
aws
rgw
specification
versions
protocol
manifests
services
sjp
youtube image

24 May 2019

RWX Storage for Container Orchestrators with CephFS and Manila - Tom Barron, Red Hat

We'll cover use cases that drive the need for read-write-many (RWX) storage for workloads running in containers orchestrated by Kubernetes, Mesos, Cloud Foundry, etc., and the role of CephFS in meeting this need. We'll distinguish among various topologies and business and trust relationships, and in light of these discuss where it makes sense to use native CephFS vs. CephFS via an NFS gateway, when it is helpful to control these via Manila (with or without other OpenStack infrastructure), and when using Manila may not be that helpful. Finally, we'll cover current work on CephFS and Manila Container Storage Interface (CSI) plugins and the advantages of CSI over the current generation of in-tree and external cloud storage providers.

About Tom Barron
Red Hat
OpenStack Manila Project Team Leader
I serve as upstream OpenStack Manila Project Team Lead and work for Red Hat where I lead a development team concerned with:

* CephFS integration in Manila and OpenStack
* Manila as supporting service infrastructure for OpenShift/kubernetes and other container orchestrators
* use of Manila in new topologies such as Edge and across clouds

Some talks I have delivered recently:

* Practical CephFS with NFS today using OpenStack Manila (Ceph Day Berlin 2018) [1]
* Manila Project Update (OpenStack Summit Berlin 2018) [2]
* Setting the Compass for Manila RWX Cloud Storage (OpenStack Summit Berlin 2018) [3]
* Distributed File Storage in multi-tenant clouds using CephFS (OpenStack Summit Vancouver 2018) [4]
[1] https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-practical-cephfs-and-nfs-using-openstack-manila

[2] https://www.openstack.org/videos/summits/berlin-2018/manila-project-update-2

[3] https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22830/setting-the-compass-for-manila-rwx-cloud-storage

[4] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20989/distributed-file-storage-in-multi-tenant-clouds-using-cephfs
  • 1 participant
  • 41 minutes
openstack
openshift
containerization
deployments
gpfs
manila
docker
kubernetes
gateways
orchestrators
youtube image

24 May 2019

Rapid Design and Effective Operating of a General Purpose Object Storage at RWTH Aachen University - Jonas Jansen, RWTH Aachen University

Collaboratively with four other partners (DUE, RUB, TU Dortmund, Univ. of Cologne), we started our S3 object storage design using Ceph, to build disaster-resilient storage. Since this was our first contact with this technology, our primary intention was to enable stakeholders to outline their needs in reliable metrics.

During the process, we added NAS storage resources. We tried several setups, resulting in poor performance. Now we are evaluating an iSCSI gateway and Windows file servers.

We developed compliance guidelines to ensure prompt delivery and availability within high-security standards and minimized costs.

These focus on:
Resiliency, by geo distributed setup
Efficiency, by erasure coding
Security, via automated testing and patching
Compliance and reduction of (human) workload, by automation and continuous delivery
High availability, by eliminating any single point of failure

About Jonas Jansen
RWTH Aachen, IT Center
IT Systemadministrator
Aachen, Germany
Jonas Jansen is the technical head of the Backup and Archive team at the IT Center. Since 2018, he has been managing the object storage project, carried out by the Server and Storage (SuS) group (of which Backup and Archive is part). SuS is a mainly hardware-focused operations group running all centralized server and storage systems.

His career as a system administrator started in 2014, and he has been with RWTH Aachen since 2015. In his current position, he faces the rapidly growing demand for storage and compute resources every day, and meets it through frequent research into, and evaluation of, new technologies. His tasks include transferring those technologies, like OpenStack Cloud or automation tools such as Puppet or Ansible, into the daily routines of the whole team. Of course, this needs to be done while maintaining the stability of legacy technologies like tape libraries and modern infrastructure like virtualization clusters.
  • 1 participant
  • 8 minutes
capacity
efficient
storage
configuration
installations
infrastructure
disks
safes
patching
cluster
youtube image

24 May 2019

Rapid Processing of NASA Satellite Data Stored with Ceph - Kevin Hrpcek & Steve Dutcher, University of Wisconsin Space Science and Engineering Center

The NASA VIIRS Atmosphere SIPS, located at the University of Wisconsin, is responsible for assisting the Science Team in algorithm development and production of VIIRS Level-2 Cloud and Aerosol products. To facilitate algorithm development, the SIPS requires access to multiple years of satellite data occupying petabytes of space. Being able to reprocess the entire mission and provide validation results back to the Science Team in a rapid fashion is critical for algorithm development. To accomplish this task the Atmosphere SIPS has deployed a six petabyte Ceph cluster employing numerous different components such as librados, EC-Pools, RBD, and CephFS. This talk will discuss choices we made to optimize the system allowing for rapid reprocessing of years of satellite data.

About Steve Dutcher
University of Wisconsin Space Science and Engineering Center
Data Scientist
Steve Dutcher is a Data Scientist working at the University of Wisconsin-Madison. He graduated in 2000 with a B.S. in computer science and began his career at the University of Wisconsin Space Science & Engineering Center. In the early years he worked on airborne instruments flying aboard NASA aircraft such as the DC-8, ER-2, WB-57, and the Global Hawk UAV. The field experiments would fly under satellite overpasses and use the data for validation. He then moved on to working with satellite ground processing systems, which involved receiving level-0 data from polar-orbiting satellites and processing it through level-2. Finally, he transitioned to his current role working on a NASA contract as part of the VIIRS Atmosphere SIPS. The Atmosphere SIPS receives global data from multiple satellites and is responsible for producing the operational VIIRS Atmosphere products. This work entails reprocessing petabytes of data, collocating measurements with other satellites, and validating results. All of that needs to be done in a rapid fashion in order to provide feedback to the science team, aiding them in their development. This has drawn him to constantly be in search of new advancements in the world of high-throughput computing in order to help advance atmospheric science.

About Kevin Hrpcek
UW-Madison Space Science & Engineering Center
Systems Administrator
Kevin is a technology and science enthusiast who is a Systems Administrator for the Space Science and Engineering Center at the University of Wisconsin-Madison. He joined SSEC in 2015 at the beginning of the NASA Atmosphere SIPS contract. Working with a small team of developers, he has helped design and build a high-throughput computing system for satellite data with Ceph as a core component. This system specializes in the rapid, reliable processing and storage of petabytes of data for multiple satellites and sensors. The Atmosphere SIPS works closely with NASA-funded science teams to aid in the development of algorithms that focus on cloud and aerosol properties. Kevin is also part of the data processing team for MIT Lincoln Laboratory's NASA-funded TROPICS project, which will consist of 6 CubeSats. These projects have given Kevin the opportunity to play a small role in supporting the advancement of science and understanding of the world.
  • 3 participants
  • 36 minutes
nasa
satellites
orbiters
scientists
mission
climate
cloud
project
steve
madison
youtube image

24 May 2019

Rebuilding a Faster OSD with Future - Kefu Chai & Radoslaw Zarzynski, Red Hat

Ceph made its debut over a decade ago. Over the past 10 years, we have seen fast development in the storage industry, where high-speed devices are more and more popular. To target these devices, Ceph developers are using a new programming model to rebuild the object storage backend for better scalability and better performance. In this session, Kefu and Radek will explain the problems Ceph is facing and show how the team is tackling them. They will also give an apples-to-apples comparison between the new OSD and the existing one. Conceptually, the OSD is the workhorse daemon exposing a host's storage over the RADOS protocol.
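
Seastar's model is C++ futures and continuations on shared-nothing cores; as a loose, language-shifted analogy only, the sketch below shows the same sequential-looking style replacing nested completion callbacks (the helper names are invented for illustration).

    # Futures/continuations analogy in Python's asyncio (not Seastar, not Ceph).
    import asyncio

    async def read_block(oid: str) -> bytes:  # stand-in for a storage op
        await asyncio.sleep(0.01)             # pretend I/O latency
        return b"data-" + oid.encode()

    async def handle_op(oid: str) -> None:
        # Reads sequentially instead of chaining completion callbacks.
        data = await read_block(oid)
        print(oid, "->", data)

    async def main() -> None:
        await asyncio.gather(*(handle_op(f"obj{i}") for i in range(3)))

    asyncio.run(main())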

About Radoslaw Zarzynski
Senior Software Engineer, Red Hat
Radek is a developer focusing on distributed storage. He's now working on Ceph, and is employed by Red Hat.

About Kefu Chai
Senior Software Engineer, Red Hat
Kefu is a developer focusing on distributed storage. He's now working on Ceph, and is employed by Red Hat. He co-presented on RADOS at Cephalocon APAC 2018, and presented an update on Ceph's migration to the Seastar framework at Scylla Summit 2018.
  • 4 participants
  • 26 minutes
speed
disk
io
processor
bottlenecked
parallelism
reason
spinner
gigahertz
dataflow
youtube image

24 May 2019

Releasing Ceph - Deep Dive Into Build Infrastructure - Alfredo Deza & Ken Dreyer, Red Hat

Building packages for Ceph is a non-trivial task: it involves various steps and several pieces of infrastructure, all acting in unison. Over the past years it has evolved into a scalable system that can handle load elastically. From ephemeral build nodes to load-balanced repositories, both development and release packages benefit from this system. This presentation will go into some of the details that make it robust and extensible, and some of the difficult problems (some of which are still unsolved!).

About Ken Dreyer
Senior Software Engineer, Red Hat, Inc.
Ken Dreyer is a software engineer working for Red Hat on the Ceph distributed storage system. He handles the release process to ship Ceph in Red Hat's product line: bug triage, build pipelines, and continuously improving the tooling for a smooth process.

About Alfredo Deza
Red Hat
Principal Software Engineer
Alfredo Deza is a principal software engineer working for Red Hat on the Ceph distributed storage system; he is an avid open source developer, unit test enthusiast, Vim plugin author, photographer, and former athlete. As a passionate, knowledge-craving developer, he can be found giving presentations to local groups about Python, file systems and storage, and system administration. He currently leads the development of ceph-volume, ceph-medic, and the build and release infrastructure for Ceph.
  • 2 participants
  • 35 minutes
security
repository
debian
deployments
responsibility
staff
release
centos
maintainer
gpe
youtube image

24 May 2019

Rook - Running Ceph Using Kubernetes - Alexander Trost & Kim-Norman Sahm, Cloudibility UG

We are going to show how easy it is to use Rook to quickly provision a new Ceph cluster and to perform day-two operations like updating it and using the RBD mirroring feature.
This talk will especially focus on the stabilization of the Rook Ceph integration in the v0.9 release.

About Kim-Norman Sahm
Head of Cloud Technology & Executive DevOps Architect, Cloudibility GmbH
Currently Kim is working as DevOps Architect at Cloudibility in Berlin, formerly as OpenStack Cloud Architect at T-Systems (operational services GmbH) and noris network AG. His favorite technologies are OpenStack, Ceph and K8s.

About Alexander Trost
Cloudibility
DevOps Engineer
Karlsruhe, Germany
Website: https://edenmal.moe/
Currently Alexander is working for Cloudibility UG as a DevOps Engineer mostly focused on containerization and the Rook project.

He is a Rook maintainer and works on several smaller Golang projects, such as the Dell Hardware Exporter for Prometheus (galexrt/dellhw_exporter).
He spoke at meetups in Germany, KubeCon NA 2017 and ContainerDays 2018.
  • 4 participants
  • 41 minutes
ruk
kubernetes
safe
provision
kuba
important
repository
managed
gateways
cuba
youtube image

24 May 2019

Rook Deployed Scalable NFS Clusters Exporting CephFS - Patrick Donnelly & Jeff Layton, Red Hat, Inc.

Rook was developed as a storage provider for Kubernetes to automatically deploy and attach storage to pods. Significant effort within Rook has been devoted to integrating the open-source storage platform Ceph with Kubernetes. Ceph is a distributed storage system in broad use today that presents unified file, block, and object interfaces to applications.

This talk will present completed work in the Ceph Nautilus release to dynamically create highly-available and scalable NFS server clusters that export the Ceph file system (CephFS) for use within Kubernetes or as a standalone appliance. CephFS provides applications with a friendly programmatic interface for creating shareable volumes. For each volume, Ceph and Rook cooperatively manage the details of dynamically deploying a cluster of NFS-Ganesha pods with minimal operator or user involvement.

About Jeff Layton
Red Hat
Principal Software Engineer
Raleigh, NC
Jeff Layton is a long-time Linux kernel developer specializing in network file systems. He has made significant contributions to the kernel's NFS client and server, the CIFS client, and the kernel's VFS layer. Recently, he has taken an interest in Ceph, in particular as a backend for other network storage protocols.

About Patrick Donnelly
Red Hat, Inc.
Senior Software Engineer
Mountain View, CA
Patrick Donnelly is a senior software engineer at Red Hat, Inc. currently leading the global development team working on the open-source Ceph distributed file system. Patrick has been a speaker at several events presenting recent work on Ceph, including Cephalocon APAC, various Openstack Summits, CERN, and Vault Linux Storage & Filesystems Conference. In 2016 he completed his Ph.D. in computer science at the University of Notre Dame with a dissertation on the topic of file transfer management in active storage cluster file systems.
  • 2 participants
  • 34 minutes
nfs
setups
stored
deploying
daemon
protocols
service
smb
sefa
mvs
youtube image

24 May 2019

Running Backups with Ceph-to-Ceph - Michel Raabe, B1 Systems GmbH

This presentation highlights several different methods for backups inside of Ceph clusters. We often receive requests for both local and remote backups, so we would like to introduce backup methods using external tools as well as some using Ceph's own 'rbd export' or 'rbd-mirror' approaches.

Learn about the pros and cons of each approach, and be warned of possible pitfalls of both native and OpenStack-based approaches.
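
To make the native option concrete, here is a bare-bones sketch of what 'rbd export' does, via librbd's Python binding: stream an image's contents out chunk by chunk. The pool, image, and chunk size are assumptions, and the real CLI additionally handles sparse regions and snapshots.

    # Naive RBD image export to a local file.
    import rados
    import rbd

    CHUNK = 4 * 1024 * 1024  # 4 MiB per read

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("rbd")  # pool name is an assumption

    with rbd.Image(ioctx, "vm-disk", read_only=True) as image, \
            open("vm-disk.img", "wb") as backup:
        size = image.size()
        for off in range(0, size, CHUNK):
            backup.write(image.read(off, min(CHUNK, size - off)))

    ioctx.close()
    cluster.shutdown()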

About Michel Raabe
Cloud Solution Architect, B1 Systems GmbH
Michel has been working for B1 Systems since 2008. He is...
  • 1 participant
  • 29 minutes
backups
backup
backing
servers
openstack
cluster
migrate
terabytes
plan
distance
youtube image

24 May 2019

Testing Ceph for the Cloud, in the Cloud - Adam Wolfe Gordon, DigitalOcean

DigitalOcean, a public cloud provider, has been using Ceph to offer block storage and S3-compatible object storage for nearly three years. For most of that time DigitalOcean used community releases of Ceph, without needing to make any modifications. However, as the company's storage infrastructure has scaled and user workloads have changed, the DigitalOcean storage team realized they would need to start modifying and contributing to Ceph. One challenge in doing this was the need to test Ceph changes easily and efficiently, preferably without relying on external environments.

In this talk, Adam Wolfe Gordon will discuss how DigitalOcean automated configuration of cloud-based test environments for Ceph's integration testing tool, Teuthology, and challenges faced in doing so. He will demonstrate how to easily set up a Ceph "lab" on DigitalOcean using Terraform and Ansible automation.

About Adam Wolfe Gordon
DigitalOcean
Sr. Software Engineer
Adam Wolfe Gordon is a software engineer at DigitalOcean, currently working on managed Kubernetes. He previously worked on block storage at DigitalOcean and EMC, implementing everything from user-facing storage management APIs for the cloud to the i/o-path for distributed storage systems, and occasionally contributing to Ceph. Adam has previously spoken at international distributed systems and Go programming language conferences, as well as numerous meetups and company-internal venues. He likes building elegant microservices, continuous deployment, and occasional forays into lower-level software.
  • 1 participant
  • 34 minutes
storage
kubernetes
digitalocean
capacity
provisioning
servers
disks
infrastructure
docker
cloud
youtube image

24 May 2019

Testing Ceph: Status, Development, & Opportunities - Gregory Farnum, Red Hat

Over ten years of deployments, Ceph has proven itself resilient to failures of all kinds. Much of this success can be traced to its “teuthology” automated testing system, which runs thousands of machine-hours of tests every day. This talk will describe the current status of the system and recent developments to improve its shortcomings on technical and community levels. We will also look at Ceph testing more broadly (from PR checks to the wider ecosystem) and identify opportunities to contribute in this high-impact part of the project.

About Gregory Farnum
Principal Software Engineer, Red Hat
Greg Farnum has been in the core Ceph development group since 2009. Now a Red Hat employee, Greg has done major work on all components of the Ceph ecosystem, and currently focuses on testing and the core RADOS system.
  • 1 participant
  • 36 minutes
testing
test
sef
cephalic
currently
process
careful
ptl
thought
steph
youtube image

24 May 2019

Unlimited Fileserver with Samba CTDB and CephFS - Robert Sander, Heinlein Support GmbH

The presentation shows the setup of an unlimited file service cluster running Samba CTDB and using CephFS as the backend storage.
It will show that using clustered Samba can achieve very high redundancy combined with the near limitless storage size of a Ceph storage system.
In addition to Samba, the setup also allows exporting CephFS via NFS.

About Robert Sander
Heinlein Support GmbH
Senior Linux Consultant
Berlin, Germany
Website: https://heinlein-support.de/
I have been working with Linux since 1995, first as a student at my university, and since 2000 as a tool for my job. I worked as a systems administrator before joining Heinlein Support GmbH as a Linux consultant in 2012.
I do consulting mainly on topics like system monitoring and Ceph, with a few additions on LDAP, Samba, and virtualization. My open source activity can be seen at https://github.com/gurubert.
Ceph Day in Berlin last year was my first international conference to speak at; previously I gathered experience at German Linux Days and similar events.
  • 1 participant
  • 42 minutes
deployments
servers
setups
vfs
capacities
das
software
awd
interface
managed
youtube image

24 May 2019

Upgrade and Scale-Out an In-production Ceph Cluster on Mixed Arm Micro-server Platform (with OpenStack in Telco) - Aaron Joue, Ambedded Technology

A case study of Ambedded's experience upgrading and scaling out an in-production Ceph cluster on its mixed Arm server platform. In this session, Ambedded will share how the challenges were overcome and how the migration was implemented without service impact.

(1) Initial condition: Ceph cluster with OpenStack, running more than 140 VMs for various applications.
(2) Performance & Workload analysis
(3) Problem discovery
(4) Challenges: Migrate and Scale-Out an in-production Ceph cluster on mixed Arm platform
- Change the operating system from Debian to CentOS
- Mixed 32bit & 64bit Arm platform
- Upgrade Ceph version from Jewel/FileStore to Luminous/BlueStore
- Upgrade Ceph management GUI version from Jewel to Luminous
(5) Solution without service impact: alternatives comparison (pros and cons)
(6) The advantages of the micro-server architecture when upgrading a Ceph cluster in production.
(7) Q&A

About Aaron Joue
CTO, Ambedded Technology
Founder & Chief Architect, SDS Solution, at Ambedded Technology Co., Ltd. Aaron is passionate about promoting the Arm microserver architecture and Ceph appliances because of the benefits of a small failure domain, efficiency, density, and power saving.
  • 1 participant
  • 40 minutes
technology
monitors
interface
enterprise
vm
server
embedded
os
safe
taiwan
youtube image

24 May 2019

Using DevOps Practices for Operating Ceph - Anders Bruvik, Safespring

DevOps is a set of practices that aims to bring together developers and IT operations; the goal is to shorten the development lifecycle and bring IT into closer alignment with business goals.

So does it make sense to talk about DevOps practices when the "only" thing we do is operate a storage cluster like Ceph? Yes, because modern system operation is becoming more and more about development: "infrastructure as code" describes how we increasingly use development practices to configure IT systems.

In this talk, I will discuss why and how DevOps matters for IT operations teams, illustrated with examples from our experience building a distributed hybrid storage cloud in Sweden.

About Anders Bruvik
Safespring
Infrastructure Engineer
Oslo Area, Norway
Website: https://bruvik.me
Anders is an infrastructure engineer at the Nordic infrastructure provider Safespring. Before that, he spent years in different technical and management positions at a large Norwegian university, where he worked with everything from config management on Unix servers to end-user computing, mobile platforms, project management, and virtual desktop infrastructure. When he's not engineering stuff, he spends time organising DevOpsDays in Oslo and Stockholm.
  • 1 participant
  • 7 minutes
devops
devs
developers
infrastructure
management
department
implementing
pipelines
working
talk
youtube image

24 May 2019

What are “caps”? (And Why Won’t my Client Drop Them?) - Gregory Farnum, Red Hat

CephFS is a powerful file system, but sometimes the performance metrics and error messages developers talk about are abstract and obscure. This talk will illuminate CephFS file capabilities: how they are thoroughly different from the unfortunately-similarly-named CephX caps, their purpose and utility, and why they are a critical performance and correctness issue. Understand the warning messages you may see about clients holding on to them and failing to drop caches and how to read their output. Finally, get a brief lesson in how to program with caps when working in Ceph’s client libraries!

About Gregory Farnum
Principal Software Engineer, Red Hat
Greg Farnum has been in the core Ceph development group since 2009. Now a Red Hat employee, Greg has done major work on all components of the Ceph ecosystem, and currently focuses on testing and the core RADOS system.
  • 6 participants
  • 34 minutes
protocols
files
sis
servers
caps
important
internals
backed
dev
steph
youtube image

24 May 2019

echo “Subject: Update on librmb” | sendmail -v SDS@ceph.com - Danny Al-Gaaf, Deutsche Telekom AG

Deutsche Telekom is running a growing multi-million-account email system with billions of emails stored on traditional NFS storage. Last year we introduced librmb to the community, a library to unify email storage on the Ceph object store.

The open source librmb library utilizes RADOS to store emails directly in a Ceph cluster, achieving maximum performance using parallel I/O from many email gateways for millions of active customers at the same time.

At Cephalocon APAC, Deutsche Telekom presented librmb at a very early stage. This talk will provide an update on the current state of the project and give a more in-depth look into the implementation and topics like erasure coding, performance, compression, and optimizations on a petabyte-scale PoC cluster.
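
Not librmb's actual API, but a sketch of the underlying librados pattern it builds on: one RADOS object per email, metadata in extended attributes, and asynchronous writes for parallel I/O. The pool name, object naming scheme, and helper function are invented for illustration.

    # One-object-per-email pattern over librados (illustrative only).
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("mail")  # pool name is an assumption

    def store_email(oid, raw_msg, mailbox):
        """Write the whole message asynchronously, then tag it with metadata."""
        comp = ioctx.aio_write_full(oid, raw_msg)
        comp.wait_for_complete()
        ioctx.set_xattr(oid, "mailbox", mailbox.encode())

    store_email("user42.0001", b"From: a@example.com\r\n\r\nhi", "INBOX")

    ioctx.close()
    cluster.shutdown()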

About Danny Al-Gaaf
Deutsche Telekom AG
Senior Cloud Technologist
Nürnberg Area, Germany
Danny Al-Gaaf is a Senior Cloud Technologist working for Deutsche Telekom. As a Ceph upstream developer he is a driver for using Ceph at Deutsche Telekom. For the last 14 years his professional focus has been on Linux and open source. He works actively in several upstream communities.

In the last few years, Danny has presented on Ceph (e.g. security, high availability, librmb) and OpenStack topics at several international conferences like Cephalocon and OpenStack Summit, as well as at German conferences and meetups.
  • 1 participant
  • 40 minutes
smart
emails
telekom
sms
systems
technology
migrated
tment
germany
news
youtube image