Exploring the internals of React: postMessage & Scheduler

Posted Jun 16, 2020 · 23 min read

postMessage & Scheduler

Before we start

  • This article contains a fair amount of source-code walkthrough, in which the author has replaced the official comments with his own notes (which, admittedly, is almost as good as not writing any). If you prefer the original code,

head over to the official repository and read along. For the same reason, the reading experience may be better in landscape mode or on a PC (the code blocks may need horizontal scrolling)

  • This article does not explicitly cover the React Fiber Reconciler or algebraic effects, but in fact they are closely related - the content here can be seen as the cornerstone of both.

Interested readers can head over to "Fiber & Algebraic Effects" for some pre-reading.

Start

In a release on September 27, 2019, React started experimenting with a new task scheduling approach in Scheduler:

  • Old approach: task scheduling and frame alignment through requestAnimationFrame (hereinafter rAF; the related requestIdleCallback is abbreviated rIC)
  • New approach: schedule tasks via high-frequency (short-interval) calls to postMessage

Emm x1... that suddenly raises a lot of questions
So this article explores what actually happened in this "little" release

Motivation

Reading the commit messages of this release, we can summarize the following points:

  1. Because rAF depends on the display's refresh rate, using it means living at the mercy of the vsync cycle (that is, the frequency of the hardware device)
  2. To get as many tasks as possible done per frame, scheduling is now driven by a message event posted at a 5ms interval - the postMessage approach
  3. The main risk of this scheme: scheduling tasks more frequently increases contention between the main thread and other browser work
  4. Compared with rAF and setTimeout, how aggressively the message event is throttled in a background tab still needs to be determined; the experiment assumes it gets the same priority as timers.
Put simply: abandon the frame-alignment strategy built on the rAF and rIC APIs, and instead control the scheduling frequency manually, to process tasks faster and improve React's runtime performance. A rough illustration of the frequency difference follows below.
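To get a feel for that difference, here is a small, hypothetical experiment (not from the React source) that counts how many times each mechanism can fire its callback within about one second in a browser:

//Hypothetical comparison: how many callbacks can each mechanism squeeze into ~1 second?
const channel = new MessageChannel()
let messageTicks = 0
let rafTicks = 0
const stopAt = performance.now() + 1000

channel.port1.onmessage = () => {
  messageTicks++
  if (performance.now() < stopAt) {
    channel.port2.postMessage(null) //schedule the next tick immediately
  } else {
    console.log('message ticks:', messageTicks) //typically thousands
  }
}
channel.port2.postMessage(null)

function rafLoop() {
  rafTicks++
  if (performance.now() < stopAt) {
    requestAnimationFrame(rafLoop)
  } else {
    console.log('rAF ticks:', rafTicks) //capped by the display refresh rate, e.g. ~60
  }
}
requestAnimationFrame(rafLoop)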

postMessage

So, what is this postMessage? Does it refer to the postMessage from the iframe communication mechanism?

No - and yes

Emm x2... Okay, a bit of a riddle

The "no" part

It is "no" because the postMessage here is dispatched by an object created with the MessageChannel interface

The MessageChannel interface of the Channel Messaging API lets us create a new message channel and communicate through its two MessagePort objects

This channel also works with Web Workers - which makes it very useful...
Let's see how it communicates:

const ch = new MessageChannel()

ch.port1.onmessage = function(msgEvent) {
  console.log('port1 got '+ msgEvent.data)
  ch.port1.postMessage('Ok, r.i.p Floyd')
}

ch.port2.onmessage = function(msgEvent) {
  console.log(msgEvent.data)
}

ch.port2.postMessage('port2!')

//output:
//port1 got port2!
//Ok, r.i.p Floyd.

Very simple, nothing special...
Emm x3...
Ah... it is rarely used directly in day-to-day code - so how is its compatibility?
(image: browser compatibility table for MessageChannel - green across the board; original image missing)

Nice! Even IE is all green from version 10 onward!

The compatibility is this good because, in modern browsers, communication between an iframe and its parent document already goes through message channels. You can even do this:

//Assume <iframe id="childFrame" src="XXX" /> whose script echoes messages back over the port it receives

const ch = new MessageChannel()
const childFrame = document.querySelector('#childFrame')

ch.port1.onmessage = function(msgEvent) {
  console.log(msgEvent.data)
  console.log('There\'s no father exists ever')
}

//Transfer port2 to the iframe; the parent keeps listening on port1
childFrame.contentWindow.postMessage('Father I can\'t breathe!', '*', [ch.port2])

//output:
//Father I can't breathe
//There's no father exists ever

Okay, now that we know what this postMessage is, let's see how it is put to work

How it works

Before talking about the operation of postMessage, let me mention Scheduler

Scheduler

Scheduler is a package for task scheduling developed by the React team and incubated inside the React repository. The team's vision is that, once the incubation is done, this package will become independent of React and a more widely usable tool.
Everything we explore next lives inside this package

Finding the MessageChannel

In the Scheduler source, searching for the word postMessage quickly points us at the SchedulerHostConfig.default.js file. Here is an excerpt of the relevant part:

In the complete source there is an if-else branch implementing two different sets of APIs. For non-DOM environments, or JavaScript environments without MessageChannel (such as JavaScriptCore), the same interface is implemented with setTimeout - a fairly simple hack that interested readers can take a look at; this article only dives into the else branch.
This is also why the file is called xxxConfig: it really does carry configuration logic
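As an aside, here is a rough sketch of what a setTimeout-based fallback of this shape could look like (a paraphrase for illustration only, not the actual source):

//Simplified sketch (NOT the real source) of a setTimeout-based host config:
//same interface as the MessageChannel version, but driven by timers
const getCurrentTime = () => Date.now()

let _callback = null
let _timeoutID = -1

function requestHostCallback(cb) {
  _callback = cb
  //re-enter on the macrotask queue as soon as possible
  _timeoutID = setTimeout(function flush() {
    if (_callback === null) return
    const hasTimeRemaining = true
    const hasMoreWork = _callback(hasTimeRemaining, getCurrentTime())
    if (hasMoreWork) {
      _timeoutID = setTimeout(flush, 0) //keep looping with timers instead of messages
    } else {
      _callback = null
    }
  }, 0)
}

function cancelHostCallback() {
  _callback = null
  clearTimeout(_timeoutID)
  _timeoutID = -1
}

Now, back to the else branch we actually care about: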

const performWorkUntilDeadline =() => {
    if(scheduledHostCallback !== null) {
      const currentTime = getCurrentTime();
      //Yield after `yieldInterval` ms, regardless of where we are in the vsync
      //cycle. This means there's always time remaining at the beginning of
      //the message event.
      deadline = currentTime + yieldInterval;
      const hasTimeRemaining = true;
      try {
        const hasMoreWork = scheduledHostCallback(
          hasTimeRemaining,
          currentTime,
       );
        if(!hasMoreWork) {
          isMessageLoopRunning = false;
          scheduledHostCallback = null;
        } else {
          //If there's more work, schedule the next message event at the end
          //of the preceding one.
          port.postMessage(null);
        }
      } catch(error) {
        //If a scheduler task throws, exit the current browser task so the
        //error can be observed.
        port.postMessage(null);
        throw error;
      }
    } else {
      isMessageLoopRunning = false;
    }
    //Yielding to the browser will give it a chance to paint, so we can
    //reset this.
    needsPaint = false;
  };

  const channel = new MessageChannel();
  const port = channel.port2;
  channel.port1.onmessage = performWorkUntilDeadline;

  requestHostCallback = function(callback) {
    scheduledHostCallback = callback;
    if(!isMessageLoopRunning) {
      isMessageLoopRunning = true;
      port.postMessage(null);
    }
  };

The logic of this block of code is actually very simple:

  1. Define a MessageChannel named channel, and a port variable pointing to its port2
  2. Use the pre-defined performWorkUntilDeadline method as the message event handler of the channel's port1
  3. Inside requestHostCallback, call the postMessage method on the previously defined port variable - that is, the channel's port2 - to send a message
  4. performWorkUntilDeadline starts running
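Stripped of the Scheduler-specific flags, the pattern described by those four steps looks roughly like this (a minimal sketch, not the actual source):

//Minimal message-loop sketch: keep pinging ourselves via MessageChannel
//until the work function reports there is nothing left to do
const channel = new MessageChannel()
const port = channel.port2

let pendingWork = null

channel.port1.onmessage = () => {
  if (pendingWork !== null) {
    const hasMoreWork = pendingWork()
    if (hasMoreWork) {
      port.postMessage(null) //schedule the next round
    } else {
      pendingWork = null //done, stop the loop
    }
  }
}

function requestWork(work) {
  pendingWork = work
  port.postMessage(null) //kick off the loop
}

//Usage: process 1000 items in slices of 100 per message event
let i = 0
requestWork(() => {
  const end = Math.min(i + 100, 1000)
  for (; i < end; i++) {
    //one unit of work
  }
  return i < 1000 //true while there is more work
})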

Okay, now let's analyze the elements in this little piece of code

requestHostCallback (hereinafter rHC)

Remember rAF and rIC - the core APIs of the previous scheduling mechanism? Since rHC is named just like them, it must be the one on duty now.
Indeed - let's go straight into the function body and have a taste:

requestHostCallback = function(callback) {
    //Assign incoming callback to scheduledHostCallback
    //Analogous to `requestAnimationFrame(() => {/* doSomething */})`
    //We can infer that scheduledHostCallback is the current task(scheduled)
    scheduledHostCallback = callback;

      //isMessageLoopRunning marks whether the message loop is currently running
    //Why a message loop? Because we keep checking for new messages - that is, new tasks
    if(!isMessageLoopRunning) {
      //If the current message loop is closed, rHC has the power to open it
      isMessageLoopRunning = true;
      //Once it is on, the channel's port1 will receive the message, which kicks off performWorkUntilDeadline
      port.postMessage(null);
    } //What happens to else?
  };

Well, we now know that the role of rHC is:

  • Prepare the current task to be executed(scheduledHostCallback)
  • Turn on the message loop
  • Trigger performWorkUntilDeadline (via the posted message)

performWorkUntilDeadline

Now it looks like rHC just sets things up, and performWorkUntilDeadline is the one actually doing the work.
Indeed - let's go straight into its body and have a taste:

const performWorkUntilDeadline =() => {
      //[A]:First check whether the current scheduledHostCallback exists
    //In other words, is there anything to do now
    if(scheduledHostCallback !== null) {
      const currentTime = getCurrentTime();
      //Ah, the deadline!
      //It seems that after the yieldInterval, how much is it?
      //According to the previous content, it should be 5ms, we will verify later
      deadline = currentTime + yieldInterval;
      //Well, a fresh deadline - in other words, how much time is left?
      //As the remaining-time definition shows: no matter where we are in the vsync cycle, there is always time left when the message (task) comes in
      const hasTimeRemaining = true; //The word timeRemaining is reminiscent of rIC
      try {
        //Well, it seems that this scheduledHostCallback is not simple, study it later
        const hasMoreWork = scheduledHostCallback(
          hasTimeRemaining,
          currentTime,
       );
        if(!hasMoreWork) {
            //If the last task is completed, close the message loop and clean up the reference to scheduledHostCallback
          isMessageLoopRunning = false;
          scheduledHostCallback = null;
        } else {
          //[C]:If there is more work, use port - the channel's port2 - to post another message
          //Obviously, this makes the whole thing recursive
          //So if there were no more work we would never get back here - then why does [A] still check scheduledHostCallback? We'll look back at that
          port.postMessage(null);
        }
      } catch(error) {
        //If the current task throws, still kick off the next round, then rethrow so the error can be observed
        port.postMessage(null);
        throw error;
      }
    } else {
      //[B]:Nothing to do right now, so there is no need to keep looping for messages
      isMessageLoopRunning = false;
    }
    //Yielding to the browser will give it a chance to paint, so we can
    //reset this.
    needsPaint = false;
  };

Now things are much clearer; let's capture it with a schematic:
(diagram: how_postMessage_work.png - how postMessage drives performWorkUntilDeadline; original image missing)
The two dotted arrows indicate reference relationships. From the analysis in the code we now know that every round of task scheduling is initiated by port - that is, the channel's port2 - calling the postMessage method, and whether the task actually gets to run seems to depend on yieldInterval and hasTimeRemaining. Let's take a look at them:

  • yieldInterval: there are two relevant pieces in the complete source:

    //Directly defined as 5ms, not discussed at all
    const yieldInterval = 5

    //but
    //forceFrameRate is a public API the Scheduler package exposes to developers,
    //allowing the scheduling interval to be set according to the device's refresh rate
    //- adapting to local conditions, so to speak

    forceFrameRate = function(fps) {

      //up to 125 fps
      //My (pretended) 144hz gaming screen was offended
      if(fps < 0 || fps > 125) {
        //Using console['error'] to evade Babel and ESLint
        console['error'](
          'forceFrameRate takes a positive int between 0 and 125, ' +
            'forcing framerates higher than 125 fps is not supported',
        );
        return;
      }
      if(fps > 0) {
        yieldInterval = Math.floor(1000 / fps);
      } else {
        //Obviously, passing nothing (or a non-positive value) resets it to 5ms - a bit of robustness
        //reset the framerate
        yieldInterval = 5;
      }
    };
  • hasTimeRemaining: compare with the way rIC is typically used:

    function doWorks() {
      //todo
    }

    function doMoreWorks() {
      //todo more
    }

    function todo() {
      requestIdleCallback((idleDeadline) => {
        //do things - the key question is whether there is any time left
        if(idleDeadline.timeRemaining() > 0) {
          doMoreWorks()
        }
      })
      doWorks()
    }

Emm x4... there are also the two questions marked in red in the diagram above:

  • What happened?: this is where a fresh scheduledHostCallback is handed to performWorkUntilDeadline, so that performWorkUntilDeadline always "has something to do" until no more tasks are registered through rHC
  • But how?: answering that is what comes next - and everything starts from the Scheduler

Scheduler

Aha, this time we give Scheduler a bigger heading to mark it as the protagonist...
This time we start right at the entry point and work our way, step by step, back up to that "But how?" question

Another note before we dig in

  • According to Scheduler's README, its current API is not the final design, so every interface exposed by the entry file Scheduler.js is prefixed with unstable_. For brevity, the prefix is omitted when naming interfaces below
  • The source also contains some profiling-related logic, mainly used to assist debugging and auditing; it has little to do with how things actually run, so it is ignored below in favour of the core logic

scheduleCallback - handing a task to the Scheduler

Our journey starts from this interface; it is the key to Scheduler's magic ~
It registers a callback function - that is, the task we want to run - into Scheduler's task queue according to the given priority and options, and kicks off task scheduling:

function unstable_scheduleCallback(priorityLevel, callback, options) {
  var currentTime = getCurrentTime(); //[A]:How does getCurrentTime get the current time?

  var startTime; //The task's start time, pushed back by options.delay if one is given
  //The task's timeout: either the custom options.timeout, or a value produced by the timeoutForPriorityLevel method
  //[B]:So what does timeoutForPriorityLevel do?
  var timeout;
  if(typeof options ==='object' && options !== null) {
    var delay = options.delay;
    if(typeof delay ==='number' && delay> 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    }
    timeout =
      typeof options.timeout ==='number'
        ? options.timeout
        :timeoutForPriorityLevel(priorityLevel); //[C]Where did this priorityLevel come from?
  } else {
    timeout = timeoutForPriorityLevel(priorityLevel);
    startTime = currentTime;
  }

  //Define an expiration time, and you will encounter it later
  var expirationTime = startTime + timeout;

  //Ah, from here we can see what a task looks like in Scheduler
  var newTask = {
    id:taskIdCounter++, //Scheduler.js globally defines a taskIdCounter as a taskId producer
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex:-1, //[D]:I have seen all of the previous ones. Is this sortIndex used for sorting?
  };
  if(enableProfiling) {
    newTask.isQueued = false;
  }

  if(startTime> currentTime) {
    //Remember the delay attribute in options? It is what makes a task's start time greater than the current time
    //Well, the sortIndex defined above appears again, in this case it is assigned the value of startTime,
    newTask.sortIndex = startTime;
    //[E]:A timer queue(timerQueue) appears here
    //If the start time is greater than the current time, push it into the timer queue
    //Obviously, for the task to be executed in the future, it must be placed in a "to be activated" queue
    push(timerQueue, newTask);
    //The logic here is discussed later, first enter the else branch
    if(peek(taskQueue) === null && newTask === peek(timerQueue)) {
      //All tasks are delayed, and this is the task with the earliest delay.
      if(isHostTimeoutScheduled) {
        //Cancel an existing timeout.
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      //Schedule a timeout.
      requestHostTimeout(handleTimeout, startTime-currentTime);
    }
  } else {
    //Here expirationTime becomes the value of sortIndex, which all but confirms that sortIndex is used for ordering
    newTask.sortIndex = expirationTime;
    //[F]:There is a push method again, this time pushing the task into the task queue(taskQueue), it seems that the timer queue and the task queue are isomorphic?
    push(taskQueue, newTask);
    if(enableProfiling) {
      markTaskStart(newTask, currentTime);
      newTask.isQueued = true;
    }
    //Logically, this checks whether scheduling is already in flight - that is, whether performWorkUntilDeadline is already looping recursively - and if not, starts it up
    //[G]:Emm x5... What does this flushWork do?
    if(!isHostCallbackScheduled && !isPerformingWork) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    }
  }

  return newTask;
}

OK, let's break down the problems marked [X] in the comments above, to flesh the function out a bit:

  • A: How does getCurrentTime get the current time?

    • Answer: in the aforementioned SchedulerHostConfig.default.js, depending on whether the performance object and the performance.now method exist, either Date.now or performance.now is used to read the current time - the latter being preferred because it is higher-resolution and monotonic. For details, please refer to here
  • B, C: let's look directly at the content of the timeoutForPriorityLevel method in Scheduler.js:

    //...other code
    var maxSigned31BitInt = 1073741823;

    //The following variables are globally defined, effectively system constants (environment variables)

    //Execute immediately
    //Obviously, if delay is not defined, then per the logic right after the [B] comment, expirationTime equals currentTime - 1
    var IMMEDIATE_PRIORITY_TIMEOUT = -1;
    //Going one step further: such a task is bound to enter the else branch, get pushed into the task queue, and reach performWorkUntilDeadline immediately
    var USER_BLOCKING_PRIORITY_TIMEOUT = 250;
    var NORMAL_PRIORITY_TIMEOUT = 5000;
    var LOW_PRIORITY_TIMEOUT = 10000;
    //The lowest priority looks like it will never time out; we will see later when it actually gets executed
    var IDLE_PRIORITY_TIMEOUT = maxSigned31BitInt;

    //...other code

    //As you can see, priorityLevel is obviously also a set of system constants
    function timeoutForPriorityLevel(priorityLevel) {
      switch(priorityLevel) {
        case ImmediatePriority:
          return IMMEDIATE_PRIORITY_TIMEOUT;
        case UserBlockingPriority:
          return USER_BLOCKING_PRIORITY_TIMEOUT;
        case IdlePriority:
          return IDLE_PRIORITY_TIMEOUT;
        case LowPriority:
          return LOW_PRIORITY_TIMEOUT;
        case NormalPriority:
        default:
          return NORMAL_PRIORITY_TIMEOUT;
      }
    }

    //...other code

Among them, priorityLevel is defined in schedulerPriorities.js, which is very intuitive:

export type PriorityLevel = 0 | 1 | 2 | 3 | 4 | 5;

//Aha, it may be implemented with symbols in the future,
//In that case, does the comparison of size have to abstract another rule?
//TODO:Use symbols?
export const NoPriority = 0;
export const ImmediatePriority = 1;
export const UserBlockingPriority = 2;
export const NormalPriority = 3;
export const LowPriority = 4;
export const IdlePriority = 5;

So a task's execution timing is determined by the current time (currentTime) + the delay (delay) + the priority timeout (XXX_PRIORITY_TIMEOUT), and the size of that increment is decided by the priority value defined in SchedulerPriorities.js; a quick worked example follows below
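To make that concrete, here is a small worked example with made-up numbers (the values are hypothetical, the constants are the ones shown above):

//Worked example with hypothetical numbers, using the constants above
const currentTime = 10000                    //ms, from getCurrentTime()
const delay = 100                            //options.delay
const startTime = currentTime + delay        //10100 -> greater than currentTime, so the task parks in timerQueue first
const timeout = 5000                         //NORMAL_PRIORITY_TIMEOUT
const expirationTime = startTime + timeout   //15100 -> later becomes sortIndex once the task reaches taskQueue

//An ImmediatePriority task scheduled at the same moment with no delay:
//startTime = 10000, timeout = -1, expirationTime = 9999 -> already "expired",
//so it sorts ahead of everything else in taskQueue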

  • D, E, F: these three points are closely related, so let's take them together

    • sortIndex: the sorting index. From the earlier content and the explanation under [B] we know its value is either startTime or expirationTime - and obviously, the earlier the time, the smaller the value - so sorting by it naturally prioritizes tasks
    • timerQueue and taskQueue: right, sortIndex must be what these two isomorphic queues are sorted by. _Seeing this, readers familiar with data structures will have guessed that the structure behind these two queues may be the standard answer for handling prioritized work - the minimum priority queue._

Sure enough, tracing the push method leads to a file called SchedulerMinHeap.js, where the minimum priority queue is implemented on top of a min-heap. We will see what push does to this queue shortly.

  • G: flushWork! ~ The name rolls right off the tongue~ and it already tells us what this does: flush all the current tasks, one by one! How exactly? Let's leave that as a cliffhanger and step out of scheduleCallback for now

Minimum heap

A min-heap is essentially a complete binary tree in which, after ordering, the value of every non-leaf node is no greater than the values of its left and right children, like so:
(diagram: min-heap.png - an example min-heap; original image missing)

Principle

Scheduler implements this min-heap with an array; let's briefly look at how it works

PUSH

We push an element with a value of 5 into the minimum heap above, and its workflow is as follows:
(diagram: min-heap-push.png - pushing the value 5 and sifting up; original image missing)
As you can see, during the push the siftUp method is called to move the element with value 5 into the position we want, producing the tree on the right. The relevant code is as follows:

type Heap = Array<Node>;
type Node = {|
  id:number,
  sortIndex:number,
|};

export function push(heap:Heap, node:Node):void {
  const index = heap.length;
  heap.push(node);
  siftUp(heap, node, index);
}

function siftUp(heap, node, i) {
  let index = i;
  while(true) {
    const parentIndex =(index-1) >>> 1;
    const parent = heap[parentIndex];
    if(parent !== undefined && compare(parent, node)> 0) {
      //The parent is larger. Swap positions.
      heap[parentIndex]= node;
      heap[index]= parent;
      index = parentIndex;
    } else {
      //The parent is smaller. Exit.
      return;
    }
  }
}

function compare(a, b) {
  //Compare sort index first, then task id.
  const diff = a.sortIndex-b.sortIndex;
  return diff !== 0? diff:a.id-b.id;
}

Notice that siftUp computes the parent node's position with the unsigned right shift operator (>>> 1 is equivalent to dividing by two and dropping the remainder), a small optimization for calculation efficiency
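The index arithmetic of an array-backed heap is easy to check by hand (the indices below are made up for illustration):

//parent(i)     = (i - 1) >>> 1
//leftChild(i)  = i * 2 + 1   (the source writes (i + 1) * 2 - 1, which is the same thing)
//rightChild(i) = i * 2 + 2
const parent = i => (i - 1) >>> 1
console.log(parent(5), parent(6))  //2 2 -> indices 5 and 6 are the children of index 2
console.log(parent(1), parent(2))  //0 0 -> indices 1 and 2 are the children of the root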

POP

Then, we need to take an element from it(used to schedule a task to execute in Scheduler), the workflow is as follows:
(diagram: min-heap-pop.png - popping the top element and sifting down; original image missing)
When we take out the first element - the one with the smallest value and therefore the highest priority - the tree loses its top, so its branches and leaves need to be reorganized; the siftDown method rearranges the remaining elements so that they still form a min-heap. The relevant code is as follows:

export function pop(heap:Heap):Node | null {
  const first = heap[0];
  if(first !== undefined) {
    const last = heap.pop();
    if(last !== first) {
      heap[0]= last;
      siftDown(heap, last, 0);
    }
    return first;
  } else {
    return null;
  }
}

function siftDown(heap, node, i) {
  let index = i;
  const length = heap.length;
  while(index <length) {
    const leftIndex =(index + 1) * 2-1;
    const left = heap[leftIndex];
    const rightIndex = leftIndex + 1;
    const right = heap[rightIndex];

    //If the left or right node is smaller, swap with the smaller of those.
    if(left !== undefined && compare(left, node) <0) {
      if(right !== undefined && compare(right, left) <0) {
        heap[index]= right;
        heap[rightIndex]= node;
        index = rightIndex;
      } else {
        heap[index]= left;
        heap[leftIndex]= node;
        index = leftIndex;
      }
    } else if(right !== undefined && compare(right, node) <0) {
      heap[index]= right;
      heap[rightIndex]= node;
      index = rightIndex;
    } else {
      //Neither child is smaller. Exit.
      return;
    }
  }
}

Emm x5... taken together with the PUSH code, this is a textbook min-heap implementation
Beyond that, the SchedulerMinHeap.js source also provides a peek method for looking at the top element:

export function peek(heap:Heap):Node | null {
  const first = heap[0];
  return first === undefined? null:first;
}

Its job, obviously, is just to look at the top element without removing it (hence, peek ~); we will run into it again very soon
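Taken together, push / pop / peek form a tiny priority queue. A quick sanity check (assuming the three functions above are in scope, with made-up sortIndex values) shows the ordering behaviour:

const heap = []
push(heap, {id: 1, sortIndex: 5000})  //"normal"-ish
push(heap, {id: 2, sortIndex: -1})    //"immediate"-ish
push(heap, {id: 3, sortIndex: 250})   //"user-blocking"-ish

console.log(peek(heap).id)  //2 -> the smallest sortIndex sits on top
console.log(pop(heap).id)   //2
console.log(pop(heap).id)   //3
console.log(pop(heap).id)   //1
console.log(pop(heap))      //null -> the heap is empty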

flushWork

Now, let's see how the Scheduler flushes all tasks:

function flushWork(hasTimeRemaining, initialTime) {
  if(enableProfiling) {
    markSchedulerUnsuspended(initialTime);
  }

  //[A]:Why do you want to reset these states?
  isHostCallbackScheduled = false;
  if(isHostTimeoutScheduled) {
    //We scheduled a timeout but it's no longer needed. Cancel it.
    isHostTimeoutScheduled = false;
    cancelHostTimeout();
  }

  //[B]:Logically, if the task itself does not throw an error, flushWork returns the result of workLoop, so what does workLoop do?
  isPerformingWork = true;
  const previousPriorityLevel = currentPriorityLevel;
  try {
    if(enableProfiling) {
      try {
        return workLoop(hasTimeRemaining, initialTime);
      } catch(error) {
        if(currentTask !== null) {
          const currentTime = getCurrentTime();
          markTaskErrored(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        throw error;
      }
    } else {
      //This official comment is deliberately kept: in production, flushWork does not catch errors thrown from workLoop,
      //while in development / profiling mode the error is caught above just to mark the task as errored before being rethrown
      //No catch in prod codepath.
      return workLoop(hasTimeRemaining, initialTime);
    }
  } finally {
    //Whether the work above finished or threw, wind down the current round of scheduling here
    currentTask = null;
    currentPriorityLevel = previousPriorityLevel;
    isPerformingWork = false;
    if(enableProfiling) {
      const currentTime = getCurrentTime();
      markSchedulerSuspended(currentTime);
    }
  }
}

Now let's analyze the A and B marked in this code~

  • A:Why reset these states?

Because rHC does not necessarily run the callback it receives immediately, the isHostCallbackScheduled flag may stay set for a while; once flushWork actually starts processing tasks, the flag needs to be released so that other tasks can be scheduled in. The same goes for isHostTimeoutScheduled - as for what that timeout is, we will meet it very soon

  • B: workLoop - Emm x6... we're nearly at the end of the journey. Like the payoff chapters of a serialized novel, this method fills in a lot of the holes

workLoop

As the name implies, this method must contain a loop for processing tasks, so what happens in this loop?

function workLoop(hasTimeRemaining, initialTime) {
  let currentTime = initialTime;
  //[A]:What does this method do?
  advanceTimers(currentTime);
  //peek at the top task in the task queue
  currentTask = peek(taskQueue);
  //As long as currentTask exists, this loop will continue
  while(
    currentTask !== null &&
    !(enableSchedulerDebugging && isSchedulerPaused)
 ) {
    if(
      currentTask.expirationTime> currentTime &&
     (!hasTimeRemaining || shouldYieldToHost())
   ) {
      //The deadline has arrived, but the current task has not expired yet, so let it run in the next scheduling cycle
      //[B]:how does shouldYieldToHost decide?
      break;
    }
    const callback = currentTask.callback;
    if(callback !== null) {
      //callback is not null, it means the current task is available
      currentTask.callback = null;
      currentPriorityLevel = currentTask.priorityLevel;
      //Determine if the current task is overdue
      const didUserCallbackTimeout = currentTask.expirationTime <= currentTime;
      markTaskRun(currentTask, currentTime);
      //[C]:continuationCallback? What does it mean? Let the task continue?
      const continuationCallback = callback(didUserCallbackTimeout);
      currentTime = getCurrentTime();
      if(typeof continuationCallback ==='function') {
          //It seems that when continuationCallback is a function, it replaces the current callback
        currentTask.callback = continuationCallback;
        markTaskYield(currentTask, currentTime);
      } else {
        if(enableProfiling) {
          markTaskCompleted(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        //If no continuationCallback was returned, pop the current task -
        //logically, that means the task is considered complete
        //Emm x7... so tasks handed to the Scheduler are expected to follow this convention
        //[D]:we will come back to this point later
        if(currentTask === peek(taskQueue)) {
          pop(taskQueue);
        }
      }
      //advanceTimers is here again...
      advanceTimers(currentTime);
    } else {
      //If the current task is no longer available, pop it off
      pop(taskQueue);
    }
    //peek out a task from taskQueue again
    //Note that if a continuationCallback was returned above, taskQueue was not popped,
    //so the task we peek here is still the same task - just with its callback swapped for the continuationCallback
    currentTask = peek(taskQueue);
  }
  //Bingo! Isn't this exactly the "is there more work" check?
  //At last, hasMoreWork in performWorkUntilDeadline makes sense!
  if(currentTask !== null) {
    return true;
  } else {
    //[E]:Hey, this branch isn't quite so simple - what is going on here?
    const firstTimer = peek(timerQueue);
    if(firstTimer !== null) {
      requestHostTimeout(handleTimeout, firstTimer.startTime-currentTime);
    }
    return false;
  }
}

We have finally answered the earlier "But how?" question.
Now let's go through the lettered notes in the code above to see how this loop runs

  • A: advanceTimers appears twice in the code above - what is it for? One look at its code makes that clear:

    function advanceTimers(currentTime) {
      //The official comment below already says it all: move tasks waiting in timerQueue into taskQueue as needed
      //Check for tasks that are no longer delayed and add them to the queue.
      let timer = peek(timerQueue);
      while(timer !== null) {
        if(timer.callback === null) {
          //Timer was cancelled.
          pop(timerQueue);
        } else if(timer.startTime <= currentTime) {
          //Timer fired. Transfer to the task queue.
          pop(timerQueue);
          timer.sortIndex = timer.expirationTime;
          push(taskQueue, timer);
          if(enableProfiling) {
            markTaskStart(timer, currentTime);
            timer.isQueued = true;
          }
        } else {
          //Remaining timers are pending.
          return;
        }
        timer = peek(timerQueue);
      }
    }

This code is actually quite simple: based on startTime and currentTime it decides whether a timer has reached its execution time, and if so transfers it into taskQueue. It can be roughly summarized as:
(diagram: advanceTimers.png - how timers are promoted from timerQueue to taskQueue; original image missing)
So the first call in workLoop reorganizes which tasks need to run right now, while the second call, inside the while loop, keeps promoting timers that come due into taskQueue as tasks are processed - a stripped-down restatement of that promotion is sketched below.
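To make the re-keying concrete, here is a stripped-down restatement of that promotion with made-up values (it ignores the cancelled-timer branch; push / peek / pop are the heap helpers from earlier):

const timerQueue = []
const taskQueue = []

//a delayed task: startTime 10100, expirationTime 15100
push(timerQueue, {id: 7, callback: () => {}, startTime: 10100, expirationTime: 15100, sortIndex: 10100})

function promoteDueTimers(currentTime) {
  let timer = peek(timerQueue)
  while (timer !== null && timer.startTime <= currentTime) {
    pop(timerQueue)
    timer.sortIndex = timer.expirationTime //re-key: from "when it may start" to "when it must finish"
    push(taskQueue, timer)
    timer = peek(timerQueue)
  }
}

promoteDueTimers(10000) //too early: nothing moves
promoteDueTimers(10200) //startTime reached: task 7 moves over
console.log(taskQueue.length, timerQueue.length) //1 0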

  • B: shouldYieldToHost - together with hasTimeRemaining, it decides whether to break out of the while loop. Since each slice is only **5ms**, shouldYieldToHost is the check that matters most of the time; here is how it is defined:

    if(
      enableIsInputPending &&
      navigator !== undefined &&
      navigator.scheduling !== undefined &&
      navigator.scheduling.isInputPending !== undefined
    ) {
      const scheduling = navigator.scheduling;
      shouldYieldToHost = function() {
        const currentTime = getCurrentTime();
        if(currentTime >= deadline) {
          //There's no time left. We may want to yield control of the main
          //thread, so the browser can perform high priority tasks. The main ones
          //are painting and user input. If there's a pending paint or a pending
          //input, then we should yield. But if there's neither, then we can
          //yield less often while remaining responsive. We'll eventually yield
          //regardless, since there could be a pending paint that wasn't
          //accompanied by a call to `requestPaint`, or other main thread tasks
          //like network events.
          if(needsPaint || scheduling.isInputPending()) {
            //There is either a pending paint or a pending input.
            return true;
          }
          //There's no pending input. Only yield if we've reached the max
          //yield interval.
          return currentTime >= maxYieldInterval;
        } else {
          //There's still time left in the frame.
          return false;
        }
      };

      requestPaint = function() {
        needsPaint = true;
      };
    } else {
      //`isInputPending` is not available. Since we have no way of knowing if
      //there's pending input, always yield at the end of the frame.
      shouldYieldToHost = function() {
        return getCurrentTime() >= deadline;
      };

      //Since we yield every frame regardless, `requestPaint` has no effect.
      requestPaint = function() {};
    }

    `navigator.scheduling.isInputPending` is an experimental browser capability that the React team is keen on: it lets running code ask whether there is pending user input without yielding first, so the Scheduler can hold on to the main thread a little longer when nothing is waiting. React's thinking on this kind of cooperation with the browser is laid out in the [Scheduling](https://reactjs.org/docs/design-principles.html#scheduling) section of the design-principles doc.
    When isInputPending is not available, the else branch falls back to yielding whenever the deadline has been reached.
    The other API here, requestPaint, simply flips the needsPaint flag so the check above knows a paint is waiting; in the fallback branch it is a no-op, since we yield every frame anyway.

  • C: continuationCallback - the line `continuationCallback = callback(didUserCallbackTimeout)` shows that whatever the task's callback returns becomes the continuation. In React this is how the Fiber Reconciler hands back "the rest of the work": when `typeof continuationCallback === 'function'`, it replaces the current task's callback so the same task picks up where it left off on the next pass; either way, advanceTimers runs again afterwards

  • D: in other words, "a callback may return a continuation function" is part of the contract between the Scheduler and its callers - tasks that React (or anything else) hands to the Scheduler are expected to follow this convention; a small sketch of a task written against it follows below
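    As a sketch of that contract (the data and names below are made up; in application code the yield check would be Scheduler's exported unstable_shouldYield rather than the internal shouldYieldToHost):

    import {unstable_shouldYield as shouldYield} from 'scheduler'

    const items = new Array(10000).fill(0)
    let cursor = 0

    //A cooperative task: do a slice of work, and if the time slice is used up,
    //return a continuation so workLoop keeps this task alive for the next pass
    function processItems(didTimeout) {
      while (cursor < items.length) {
        items[cursor] = cursor * 2 //one unit of (made-up) work
        cursor++
        //If the task has already blown past its expirationTime (didTimeout),
        //push through without yielding; otherwise yield when the scheduler asks
        if (!didTimeout && shouldYield()) {
          return processItems //continuationCallback: "call me again later"
        }
      }
      return null //anything that is not a function means "this task is done"
    }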

  • E: when taskQueue has been drained but timers remain, workLoop asks requestHostTimeout to call handleTimeout once the earliest timer comes due

    requestHostTimeout also lives in SchedulerHostConfig.default.js - and guess what it turns out to be?

    //just a plain setTimeout - the waiting is handed over to the event loop
    requestHostTimeout = function(callback, ms) {
      taskTimeoutID = setTimeout(() => {
        callback(getCurrentTime());
      }, ms);
    };

    //...and cancelling simply clears taskTimeoutID
    cancelHostTimeout = function() {
      clearTimeout(taskTimeoutID);
      taskTimeoutID = -1;
    };

    handleTimeout, in turn, is defined back in Scheduler.js:

    function handleTimeout(currentTime) {
      isHostTimeoutScheduled = false;
      //promote any timers that have come due into the task queue
      advanceTimers(currentTime);

      //if a host callback is already scheduled (isHostCallbackScheduled is true),
      //the message loop will pick the promoted tasks up by itself;
      //we only need to act when nothing is scheduled yet
      if(!isHostCallbackScheduled) {
        //flush whatever is now in taskQueue
        if(peek(taskQueue) !== null) {
          //advanceTimers above may have just pushed something into taskQueue,
          //so start flushing
          isHostCallbackScheduled = true;
          requestHostCallback(flushWork);
        } else {
          //taskQueue is still empty,
          //so look at timerQueue instead
          //and arm another timeout for the earliest pending timer
          const firstTimer = peek(timerQueue);
          if(firstTimer !== null) {
            //wait startTime - currentTime ms, i.e. exactly until that timer's start time
            requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
          }
        }
      }
    }

    And with that we are back at workLoop - the loop is closed.

Here is a diagram summarizing how workLoop operates:

(diagram: workLoop.png - the workLoop flow; original image missing)
Emm x7...
So that is essentially the whole running rhythm of the Scheduler.

Beyond what we covered, the package also exposes a few more interfaces (cancelling a task, for example) that are not expanded on here; interested readers can browse them directly in the [source](https://github.com/facebook/react/tree/master/packages/scheduler/src).

Summary

postMessage

Scheduler builds a MessageChannel and drives `performWorkUntilDeadline` through it: port2 posts a message, port1's handler runs a slice of work, and if there is more work it posts again - a post → work → post-again message loop that replaces the old rAF / rIC frame alignment.

Scheduler

Internally, Scheduler maintains two min-heaps:

  • taskQueue - tasks that are ready to run, ordered by expirationTime

  • timerQueue - delayed tasks that are not yet due, ordered by startTime

A task's timing comes from the current time, the optional delay, and the priority timeout mapped by `timeoutForPriorityLevel`.
`flushWork` is what gets handed to requestHostCallback, and it simply drives the while loop in `workLoop`: peek at the top of taskQueue, run its callback, keep promoting due timers from timerQueue via `advanceTimers`, keep a task alive when it returns a `continuationCallback`, and stop once the deadline arrives or the queues are drained.
Whether to hand the main thread back to the browser is decided by the 5ms yieldInterval together with `needsPaint & isInputPending` when `navigator.scheduling` is available.
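As a closing illustration, here is a hedged sketch of using the package directly (assuming the unstable_-prefixed exports mentioned earlier; the task bodies are made up):

import {
  unstable_scheduleCallback as scheduleCallback,
  unstable_cancelCallback as cancelCallback,
  unstable_ImmediatePriority as ImmediatePriority,
  unstable_NormalPriority as NormalPriority,
} from 'scheduler'

//Ready immediately: its timeout is -1, so it sorts ahead of everything in taskQueue
scheduleCallback(ImmediatePriority, () => {
  console.log('runs as soon as the message loop spins')
})

//Delayed: parked in timerQueue until the delay elapses, then promoted by advanceTimers
const delayed = scheduleCallback(
  NormalPriority,
  () => console.log('runs about 1s later, inside a 5ms slice'),
  {delay: 1000},
)

//A task handle can also be cancelled: its callback is nulled out and workLoop skips it
cancelCallback(delayed)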

(diagram: scheduler.png - an overview of the whole scheduling flow; original image missing)
...
Dropping rIC / rAF in favour of postMessage gives React fine-grained control over when and how often work gets scheduled, instead of leaving that to the browser's frame timing.
And Scheduler itself, once it matures, is meant to leave the React repository and become a general-purpose scheduling tool.
Emm x8...
That's about it for Scheduler ~