Ruby 3.0 and the new FiberScheduler interface

Posted on December 28, 2020 by wjwh


A few days ago, on Christmas Day 2020, Matz released Ruby 3.0. Like every year, a host of interesting new features was included with the new version. Most articles I have read so far focus on the new ways to introduce type hints and on the Ractor system, but for me the most interesting addition was the Fiber::SchedulerInterface class. It allows for (but does not yet implement) more advanced event-loop-based schedulers for non-blocking I/O in Ruby. Several techniques already exist for this in Ruby, from event loop frameworks like EventMachine and Async to releasing the GVL in C extensions, but this new interface is more exciting to me because it makes it much easier to accidentally do the right thing.

In this post I’ll go over the way the scheduler interface works in “normal” Ruby, and how to access it from within MRI C extensions. We’ll also have a look at the drawbacks of the current interface, because nothing is perfect.

Nonblocking I/O with Fibers

In MRI Ruby, a Fiber is a primitive for implementing lightweight cooperative concurrency. Fibers are somewhat like traditional threads in that they take a block and run it concurrently with other fibers, but many fibers live “inside” a single thread. This means that at most one fiber per thread can run at a time, but because fibers use very little memory it is feasible to create hundreds of thousands of them without problems. If your fibers run very computationally intensive tasks, this does not bring any benefit: dividing the work into many small parts does not help if there is not enough CPU capacity for it in the first place. However, many Ruby processes spend much of their time waiting on I/O, such as waiting for the responses of API and database calls or waiting to read more of an HTTP request from a network socket. This is the use case where fibers shine.
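For reference, the underlying mechanics are just resume and yield; nothing here involves the scheduler yet:

fiber = Fiber.new do
  puts "part one"
  Fiber.yield        # hand control back to whoever resumed us
  puts "part two"
end

fiber.resume # prints "part one" and returns when the fiber yields
fiber.resume # prints "part two" and returns when the block finishes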

More “traditional” systems manage all this waiting around by allocating a separate operating system (OS) thread for every request, then making blocking system calls for reading from and writing to sockets and files. This works well up to a point, but an OS thread is relatively expensive to create and switching between threads requires a context switch into the OS, which leads to a lot of overhead. Fibers can leverage the facilities for non-blocking and asynchronous I/O that modern operating systems provide to skip most of this overhead. They do this by yielding whenever they realize they will not be able to make progress, giving the floor to another fiber. Examples are when Kernel#sleep is called or when a read() or write() syscall returns the EWOULDBLOCK or EAGAIN error codes. The missing link in this story is the new fiber scheduler, which is the code that a fiber yields to. The scheduler is responsible for maintaining an inventory of blocked fibers and resuming those fibers when the reason they were blocked disappears. For example, if a fiber was blocked because it called sleep(10), it should be resumed after 10 seconds. If the fiber was blocked because no data was available on the socket it wanted to read from, it should be resumed as soon as data arrives.

The scheduler is allowed to use any mechanism to achieve this, but in practice there are a couple of good options:

- select() and poll(), which are portable but do not scale well to large numbers of file descriptors
- epoll() on Linux
- kqueue() on macOS and the BSDs
- io_uring on recent Linux kernels
- I/O Completion Ports (IOCP) on Windows

Choosing between these possible mechanisms and managing the conversion between Ruby objects and what the OS demands is the task of the scheduler. Since fibers are thread-local and cannot move between threads, the scheduler is thread-local as well. It would theoretically be possible to have an epoll() based scheduler for one thread and an io_uring based one for another, but in practice the scheduler would probably be provided by a separate gem that automatically chooses the most performant interface available on the current OS.
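To make this more concrete, here is a minimal sketch of what such a scheduler could look like, using plain IO.select as the (portable, if not particularly scalable) mechanism. The class name SketchScheduler and all of its internals are made up for illustration; only the hook names (io_wait, kernel_sleep, block, unblock, fiber and close) come from the scheduler interface. A real scheduler would have to handle timeouts, errors and cross-thread wakeups much more carefully.

class SketchScheduler
  def initialize
    @readable = {} # IO => Fiber waiting until it becomes readable
    @writable = {} # IO => Fiber waiting until it becomes writable
    @waiting  = {} # Fiber => monotonic time at which to wake it up
    @blocking = {} # Fiber => blocker (Queue, Mutex, ...) it is waiting on
    @ready    = [] # Fibers unblocked from other threads
    @mutex    = Mutex.new
  end

  # Called by IO#read, IO#write, IO#wait_readable etc. when an operation
  # would block. The timeout argument is ignored here to keep things short.
  def io_wait(io, events, timeout)
    @readable[io] = Fiber.current if (events & IO::READABLE).nonzero?
    @writable[io] = Fiber.current if (events & IO::WRITABLE).nonzero?
    Fiber.yield # park this fiber; close() resumes it once the IO is ready
    @readable.delete(io)
    @writable.delete(io)
    events
  end

  # Called by Kernel#sleep.
  def kernel_sleep(duration = nil)
    block(:sleep, duration)
  end

  # Called by Thread#join, Queue#pop, ConditionVariable#wait and friends.
  def block(blocker, timeout = nil)
    if timeout
      @waiting[Fiber.current] = current_time + timeout
    else
      @blocking[Fiber.current] = blocker
    end
    Fiber.yield
  end

  # May be called from another thread when a blocked fiber can continue.
  def unblock(blocker, fiber)
    @mutex.synchronize { @ready << fiber }
  end

  # Called by Fiber.schedule: create a non-blocking fiber and start it.
  def fiber(&block)
    Fiber.new(blocking: false, &block).tap(&:resume)
  end

  # Called when the thread exits; this is where the event loop runs.
  def close
    until @readable.empty? && @writable.empty? && @waiting.empty? && @blocking.empty?
      if @readable.empty? && @writable.empty?
        sleep(next_timeout) # the root fiber is blocking, so this really sleeps
      else
        readable, writable = IO.select(@readable.keys, @writable.keys, nil, next_timeout)
        (readable || []).each { |io| @readable[io]&.resume }
        (writable || []).each { |io| @writable[io]&.resume }
      end

      now = current_time
      @waiting.select { |_fiber, at| at <= now }.each_key do |fiber|
        @waiting.delete(fiber)
        fiber.resume
      end

      @mutex.synchronize { @ready.dup.tap { @ready.clear } }.each do |fiber|
        @blocking.delete(fiber)
        fiber.resume
      end
    end
  end

  private

  # Wait at most 100 ms per loop iteration so that fibers unblocked from
  # other threads are picked up; a real scheduler would use a wakeup pipe.
  def next_timeout
    timers = @waiting.values
    [timers.empty? ? 0.1 : [timers.min - current_time, 0].max, 0.1].min
  end

  def current_time
    Process.clock_gettime(Process::CLOCK_MONOTONIC)
  end
end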

To make the integration seamless for Ruby developers, in Ruby 3.0 and onwards all relevant standard library methods have been patched to yield to the scheduler whenever they encounter a situation in which they would block the current fiber. At the time of writing these methods include Kernel#sleep, IO#wait_readable, IO#wait_writable, IO#read, IO#write and related methods (e.g. IO#puts, IO#gets), Thread#join, ConditionVariable#wait, Queue#pop and SizedQueue#push. This yielding to the scheduler only happens when a scheduler has actually been set for the thread with Fiber.set_scheduler. The real power of this becomes apparent when you realize that any method in any gem that eventually calls IO#read or IO#write can make use of the scheduler, whether that gem was written with the scheduler in mind or not. This immediately makes many gems like database drivers and network libraries scheduler-aware, as long as they don't rely too heavily on C extensions. Even if they do, not all is lost: we'll look at integrating C extensions with the scheduler later.
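With a scheduler in place (any implementation will do; the SketchScheduler above is used here purely for illustration), even this tiny snippet already behaves concurrently:

Thread.new do
  Fiber.set_scheduler(SketchScheduler.new)

  3.times do |i|
    Fiber.schedule do
      sleep(1)                  # yields to the scheduler instead of blocking the thread
      puts "fiber #{i} done"
    end
  end
end.join
# All three fibers finish after roughly one second in total, not three,
# because Kernel#sleep hands control back to the scheduler.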

A non-blocking IO example

Let’s look at a very simple example from the comprehensive release notes:

require 'net/http'

start = Time.now

Thread.new do # in this thread, we'll have non-blocking fibers
  Fiber.set_scheduler Scheduler.new

  %w[2.6 2.7 3.0].each do |version|
    Fiber.schedule do # Runs block of code in a separate Fiber
      t = Time.now
      # Instead of blocking while the response will be ready, the Fiber
      # will invoke scheduler to add itself to the list of waiting fibers
      # and transfer control to other fibers
      Net::HTTP.get('rubyreferences.github.io', "/rubychanges/#{version}.html")
      puts '%s: finished in %.3f' % [version, Time.now - t]
    end
  end
end.join # At the END of the thread code, Scheduler will be called to dispatch
         # all waiting fibers in a non-blocking manner

puts 'Total: finished in %.3f' % (Time.now - start)
# Prints:
#  2.6: finished in 0.139
#  2.7: finished in 0.141
#  3.0: finished in 0.143
#  Total: finished in 0.146

This example sets the scheduler for the new thread to an instance of some Scheduler class and then uses Fiber.schedule to send off three HTTP requests that fetch the release notes for several recent Ruby versions. We can see in the output that the total time taken is only a few milliseconds longer than the slowest response, indicating that all three HTTP requests were in flight concurrently rather than one after the other.

How does this work, given that none of the methods mentioned above appear in the example? Well, Net::HTTP.get uses IO#write and IO#read in its implementation, so while a request is in flight and waiting for a response, its fiber yields back to the scheduler to let other fibers do work. Since the actual CPU work here is very low, many fibers can run on the same scheduler without getting in each other’s way too much.

Integrating the Fiber scheduler into C extensions

Having methods like IO#read and Kernel#sleep automatically use the fiber scheduler is all well and good, but many useful Ruby gems use C extensions to drop the GVL, perform operations “underneath” the Ruby garbage collector, or simply to speed up computationally expensive operations. These C extensions typically do not use the Ruby methods for writing to and reading from file descriptors, so if they want to use an existing fiber scheduler for efficient non-blocking I/O they have to call it themselves. Luckily, this is not hard!

A (very) quick recap of C extensions for MRI: a Ruby object is represented in C by a VALUE, which is either an immediate value or a pointer to the struct holding the object’s data. A few “standard” values like true, false and nil have been predefined and are available to test against as Qtrue, Qfalse, Qnil, etc. Ruby methods can be called either directly in C (if the Ruby method itself is implemented in C) or with the rb_funcall() C function.

Using the scheduler is obviously only possible if one has been set for the current thread with Fiber.set_scheduler. To check whether this is the case in a C extension, we can use the rb_scheduler_current() C function, which returns a VALUE containing either the current scheduler or Qnil if none has been set. If a scheduler is set, it can be invoked through one of the functions defined in scheduler.c in the main folder of the Ruby repo. Finally, as a complete example, let’s look at how the Ruby standard library implements this pattern:

int rb_io_wait_readable(int f)
{
    VALUE scheduler = rb_scheduler_current();
    if (scheduler != Qnil) {
        return RTEST(
            rb_scheduler_io_wait_readable(scheduler, rb_io_from_fd(f))
        );
    }

    // rest of function for if no scheduler was defined
}

This snippet also demonstrates how to get a Ruby IO object out of a file descriptor: with rb_io_from_fd().

Drawbacks of Fibers

The current implementation does still have some drawbacks. For one, fibers are thread-local and cannot be moved between threads (yet). (For the purposes of this paragraph, you can also read “Ractor” wherever I wrote “thread”.) This means that any system that wants to scale beyond a single core will have to run multiple threads, each with their own scheduler. If the durations of the fiber tasks are unevenly distributed, some threads may run through their entire work queue and sit idle while other threads still have a surplus of work. This limits the throughput of the system since not all resources are properly utilized. Languages like Haskell and Go solve this issue with “work stealing”, where threads without any work to do can “steal” work from the run queues of other schedulers. This issue can be worked around somewhat by having a global Queue (or intermediate “pipe” Ractor) that distributes work items, as sketched below, but this only works for coarser items like entire HTTP requests or background jobs. Once a request or background job has been started on a certain thread it will stay there forever.
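A minimal sketch of that workaround, mirroring the release-notes example from earlier: work items are handed out through a shared Queue, each worker thread has its own scheduler (the illustrative SketchScheduler from above stands in for any real scheduler gem), and the number of threads is arbitrary.

require 'net/http'

jobs = Queue.new
%w[2.6 2.7 3.0].each { |version| jobs << version }
2.times { jobs << nil } # one stop signal per worker thread

workers = 2.times.map do
  Thread.new do
    Fiber.set_scheduler(SketchScheduler.new) # every thread needs its own scheduler
    while (version = jobs.pop)               # coarse work items are distributed here...
      Fiber.schedule do                      # ...but each fiber then stays on this thread
        Net::HTTP.get('rubyreferences.github.io', "/rubychanges/#{version}.html")
      end
    end
  end
end

workers.each(&:join)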

Another drawback (for POSIX systems only) is that “non-blocking” I/O can actually still block sometimes! While you can set any file descriptor to non-blocking mode, only file descriptors that represent sockets and pipes will ever return EWOULDBLOCK. A file descriptor representing an actual file on a filesystem will never return EWOULDBLOCK, even if the file lives on a network filesystem on the other side of the world and reading it very much blocks. This is more a POSIX problem than something unique to Ruby, but it is still something to watch out for. The fiber scheduler C interface does provide hooks for asynchronous read and write operations, which could be backed by “real” asynchronous I/O mechanisms like io_uring on Linux or I/O Completion Ports on Windows. However, those functions are as yet undocumented, so it is unclear if and how they are meant to be used. There are also many more syscalls that can block (such as unlink()-ing a file on a slow filesystem) that are not yet covered by the available asynchronous operations.

Finally, the main drawback of the fiber scheduling mechanism is that at the time of writing (end of December 2020) no production-ready fiber schedulers have been released yet! I am aware of the evt library but could not get it to install, although that might be because the liburing library on my machine is currently broken due to work on another project involving it. Some people on Reddit were also speculating that a scheduler based on the async framework might be in the works, but no news seems to be available as of yet. Lastly, there is a toy scheduler based on IO.select in the test suite of MRI, but it seems to be broken as it can try to resume fibers that have already finished. The current unavailability of a good scheduler also means that we cannot have a default scheduler initialised on program startup; for now one has to come from a gem (or your own code) and be activated with Fiber.set_scheduler.

Conclusion

I think the interface for fiber schedulers is extremely interesting. It enables highly scalable IO-bound systems in Ruby, while still providing developers with a nice programming model free of callbacks and async/await style “colored” functions. You simply write your fiber code in the usual straightforward way and the scheduler will manage all your fibers in an efficient manner, parking them somewhere out of the way when they block and retrieving them as soon as they can resume again. A very Ruby-like approach, designed to keep programmers happy.

In this first iteration there are still several properties of the system that could be improved upon, such as “work stealing” of fibers between threads and/or Ractors, the addition of a default scheduler, and expansion of the available asynchronous operations. The available documentation is also not very thorough yet. Still, as these systems get used more widely, most of these issues will shake out. It will be interesting to see how Ruby evolves!