Implementation and scheduling of time slicing (original 26,000 words)

Posted May 27, 2020 · 21 min read

I'm a React newbie who is used to Vue. Over the weekend I was chatting with a more experienced friend, the topic drifted to Fiber, and I got thoroughly schooled...

He also reminded me what the current environment looks like:

  1. Baidu search results are not trustworthy; the articles it surfaces are mostly copies of the same author's post (true in most cases)
  2. A lot of low-effort content is produced by copy-paste, so the article you are reading may already be several versions out of date (true in most cases)

So I started reading the source code related to React Fiber. Why Fiber? Because neither Vue nor Vue 3 has anything quite like it, yet it is hyped everywhere.

This article was written on 2020/05/25; the reference source code version is v16.13.1.

What problem does Fiber solve?

First of all, we need to understand why Fiber exists.

Synchronous updates in the old version of React: when React decides to load or update the component tree, it does a lot of work, such as calling each component's lifecycle functions, computing and diffing the Virtual DOM, and finally updating the DOM tree.

Here is an example: suppose updating one component takes 1 millisecond. Updating 1000 components then takes 1 second, and during that whole second the main thread is completely occupied with the update.

The browser repaints the page at a regular interval, usually 60 times per second. In other words, roughly every 16 milliseconds (1/60 ≈ 0.0167 s) the browser performs a periodic repaint; each of these 16 ms intervals is called a frame. Within one frame the browser has to:

  1. Execute JS.
  2. Calculate the Style.
  3. Build a layout model(Layout).
  4. Draw the layer style(Paint).
  5. Combine calculation and rendering results(Composite).

If any one of these steps takes too long and the total exceeds 16 ms, the user may perceive jank. In the example above, a synchronous update takes 1 second, which means the page is effectively frozen for that entire second!!!

Because JavaScript is single-threaded, no single synchronous task should hog the thread for too long, otherwise the program cannot respond to other input. React's old update process violated exactly this rule, and React Fiber is about changing that status quo.

What is Fiber

One solution to the synchronous-update problem is time slicing: break the update into fragments, splitting one time-consuming task into many small units of work. This enables non-blocking rendering, priority-based updates, and pre-rendering content in the background.

A fiber is the unit of work processed by the performUnitOfWork method (described in detail later). As a data structure it represents a piece of work to be done; in other words, it is a unit of work, and the Fiber architecture provides a convenient way to track, schedule, pause, and abort that work.

Fiber creation and use process:

  1. The data from each React element returned by the render method is merged into the fiber node tree
  2. React creates a fiber node for each React element
  3. Unlike React elements, fibers are not recreated on every render
  4. In subsequent updates, React reuses the fiber node and only updates the necessary attributes with data from the corresponding React element
  5. At the same time, React maintains a workInProgress tree for computing updates (double buffering). You can think of it as the tree representing the work currently in progress, while another, older tree represents what is already rendered on screen. React builds the WIP tree while comparing it against the old tree, and each node's alternate points to its equivalent node in the old tree (see the sketch after this list)
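To make the double-buffering idea concrete, here is a rough sketch. It is not React's actual implementation (the real logic lives in createWorkInProgress); cloneToWorkInProgress is a hypothetical helper that only shows how the two trees stay linked through alternate:

function cloneToWorkInProgress(current, pendingProps) {
  //`current` is the fiber in the rendered tree; `workInProgress` is its
  //counterpart in the tree being built for this update.
  let workInProgress = current.alternate;
  if (workInProgress === null) {
    //First update for this node: create the alternate once and link both ways.
    workInProgress = {
      type: current.type,
      pendingProps,
      memoizedProps: current.memoizedProps,
      child: current.child,
      sibling: current.sibling,
      return: current.return,
      alternate: current,
    };
    current.alternate = workInProgress;
  } else {
    //Subsequent updates: reuse the existing node instead of recreating it.
    workInProgress.pendingProps = pendingProps;
  }
  return workInProgress;
}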

PS: the workInProgress tree mentioned above belongs to the beginWork process. Covering it properly would double the length of this article, so it is not explained in detail here (mainly because I'm lazy...).

The Fiber architecture is divided into two main phases: reconciliation (render) and commit.

Reconciliation phase of React

After the Fiber rewrite, the reconciliation phase is not that different from the old version, except that the comparison is no longer done as one big recursion and the changes are not committed immediately.

Lifecycle hooks involved:

  • shouldComponentUpdate
  • componentWillMount (deprecated)
  • componentWillReceiveProps (deprecated)
  • componentWillUpdate (deprecated)
  • static getDerivedStateFromProps

Characteristics of reconciliation:

  • It can be interrupted. If the time slice runs out during the reconciliation phase, React gives up control. Because the work performed during reconciliation does not produce any user-visible change, giving up control at this stage is safe.
  • Because the reconciliation phase may be interrupted, resumed, or even redone, its lifecycle hooks may be called multiple times! For example, componentWillMount may be called twice.
  • Therefore, the lifecycle hooks of the reconciliation phase must not contain side effects, which is why the hooks above were deprecated.

The reconciliation process traverses the tree with a depth-first search (DFS): child nodes are processed first, then sibling nodes, until the whole tree has been visited.
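To illustrate that order, here is a minimal sketch of a child-first, then-sibling walk over fiber-shaped nodes (child / sibling / return pointers, using null when a pointer is absent). It is not React's code, just the shape of the loop:

//Illustrative depth-first walk over fiber-shaped nodes (not React source).
function walk(root, visit) {
  let node = root;
  while (node !== null) {
    visit(node);
    if (node.child !== null) {
      //Go down to the first child before anything else.
      node = node.child;
      continue;
    }
    while (node !== null && node.sibling === null) {
      //No more siblings here: this subtree is finished, climb back up.
      if (node === root) return;
      node = node.return;
    }
    if (node === null || node === root) return;
    //Then move sideways to the next sibling.
    node = node.sibling;
  }
}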

Commit phase of React

Lifecycle hooks involved:

  • componentDidMount
  • componentDidUpdate
  • componentWillUnmount
  • getSnapshotBeforeUpdate

Commit: cannot be paused; it runs straight through until the interface has been fully updated.

How does Fiber handle priority?

For the UI, the following needs to be considered:

Not all state updates need to be rendered immediately, for example:

  • Updates to parts outside the viewport do not have the same priority as updates to visible parts
  • Responding to user input has a higher priority than rendering data returned from a request
  • Ideally, high-priority operations should be able to interrupt the execution of low-priority ones

Therefore, React defines a series of event priorities

Below is the source code for the priority timeouts

[Source file](https://github.com/facebook/r...)

  var maxSigned31BitInt = 1073741823;

  //Times out immediately
  var IMMEDIATE_PRIORITY_TIMEOUT = -1;
  //Eventually times out
  var USER_BLOCKING_PRIORITY = 250;
  var NORMAL_PRIORITY_TIMEOUT = 5000;
  var LOW_PRIORITY_TIMEOUT = 10000;
  //Never times out
  var IDLE_PRIORITY = maxSigned31BitInt;
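So, with the constants above and the formula used later in unstable_scheduleCallback (expirationTime = startTime + timeout), a task's priority translates directly into how soon it must run. The numbers below are just an illustration with a made-up start time:

//expirationTime = startTime + timeout (see unstable_scheduleCallback below)
const startTime = 1000;                  //pretend the current time is 1000 ms
startTime + IMMEDIATE_PRIORITY_TIMEOUT;  //=> 999, already expired: run immediately
startTime + USER_BLOCKING_PRIORITY;      //=> 1250, must run within 250 ms
startTime + NORMAL_PRIORITY_TIMEOUT;     //=> 6000
startTime + IDLE_PRIORITY;               //=> 1073742823, effectively never expires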

When an update arrives, React does not run the diff immediately. The update is first pushed into an update queue and handed to the Scheduler, which decides when to process it based on how busy the main thread currently is.

No matter how the work is split or in what order the pieces run, Fiber ensures that the state and the view remain consistent.

How does React make sure that tasks of the same priority triggered within a certain window get the same expiration time? It uses the ceiling method below... (I had never used the | operator this way before...)

Below is the source code of ceiling, which handles the expiration time

[Source file](https://github.com/facebook/r...)

function ceiling(num, precision) {
  return (((num / precision) | 0) + 1) * precision;
}
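For example, with a made-up bucket size of 25, any value that falls into the same bucket is rounded up to the same expiration time, so tasks created close together expire together:

ceiling(1001, 25); //=> 1025
ceiling(1020, 25); //=> 1025 (same bucket, same expiration time)
ceiling(1026, 25); //=> 1050 (next bucket)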

So why does the expiration time need to be consistent? See below.

How to schedule Fiber?

First, we need to find the scheduling entry point: scheduleUpdateOnFiber.

Each root has only one scheduled task at a time. If one already exists, we must make sure its expiration time is the same as that of the next task to work on (this is what the ceiling method above is used for).

Source File

export function scheduleUpdateOnFiber(
  fiber: Fiber,
  expirationTime: ExpirationTime,
) {
  checkForNestedUpdates();
  warnAboutRenderPhaseUpdatesInDEV(fiber);

  //Call markUpdateTimeFromFiberToRoot to update the expirationTime of the fiber node
  //ps The fiber tree at this time has only one root fiber.
  const root = markUpdateTimeFromFiberToRoot(fiber, expirationTime);
  if(root === null) {
    warnAboutUpdateOnUnmountedFiberInDEV(fiber);
    return;
  }

  //TODO: computeExpirationForFiber also reads the priority. Pass the
  //priority as an argument to that function and this one.
  const priorityLevel = getCurrentPriorityLevel();

  if (expirationTime === Sync) {
    if (
      //Check if we're inside unbatchedUpdates
      (executionContext & LegacyUnbatchedContext) !== NoContext &&
      //Check if we're not already rendering
      (executionContext & (RenderContext | CommitContext)) === NoContext
    ) {
      //Register pending interactions on the root to avoid losing traced interaction data.
      schedulePendingInteractions(root, expirationTime);

      //This is a legacy edge case. The initial mount of a ReactDOM.render-ed
      //root inside of batchedUpdates should be synchronous, but layout updates
      //should be deferred until the end of the batch.
      performSyncWorkOnRoot(root);
    } else {
      ensureRootIsScheduled(root);
      schedulePendingInteractions(root, expirationTime);
      if(executionContext === NoContext) {
        //Flush the synchronous work now, unless we're already working or inside
        //a batch. This is intentionally inside scheduleUpdateOnFiber instead of
        //scheduleCallbackForFiber to preserve the ability to schedule a callback
        //without immediately flushing it. We only do this for user-initiated
        //updates, to preserve historical behavior of legacy mode.
        //Flush the synchronous callback queue immediately
        flushSyncCallbackQueue();
      }
    }
  } else {
    //Schedule a discrete update but only if it's not Sync.
    if(
      (executionContext & DiscreteEventContext) !== NoContext &&
      //Only updates at user-blocking priority or greater are considered
      //discrete, even inside a discrete event.
     (priorityLevel === UserBlockingPriority ||
        priorityLevel === ImmediatePriority)
   ) {
      //This is the result of a discrete event. Track the lowest priority
      //discrete update per root so we can flush them early, if needed.
      if(rootsWithPendingDiscreteUpdates === null) {
        rootsWithPendingDiscreteUpdates = new Map([[root, expirationTime]]);
      } else {
        const lastDiscreteTime = rootsWithPendingDiscreteUpdates.get(root);
        if(
          lastDiscreteTime === undefined ||
          lastDiscreteTime> expirationTime
       ) {
          rootsWithPendingDiscreteUpdates.set(root, expirationTime);
        }
      }
    }
    //Schedule other updates after in case the callback is sync.
    ensureRootIsScheduled(root);
    schedulePendingInteractions(root, expirationTime);
  }
}

The above source code mainly does the following things

  1. Call markUpdateTimeFromFiberToRoot to update expirationTime of Fiber node

  2. ensureRootIsScheduled(update focus)

  3. schedulePendingInteractions will actually call scheduleInteractions

    • scheduleInteractions will use FiberRoot's pendingInteractionMap attribute and different expirationTime to obtain the set of update tasks required for each schedule, record their number, and detect whether these tasks will go wrong.

The focus of the update is that scheduleUpdateOnFiber calls function ensureRootIsScheduled(root: FiberRoot) for every update.

Below is the source code of ensureRootIsScheduled

Source File

function ensureRootIsScheduled(root:FiberRoot) {
  const lastExpiredTime = root.lastExpiredTime;
  if (lastExpiredTime !== NoWork) {
    //Special case:Expired work should flush synchronously.
    root.callbackExpirationTime = Sync;
    root.callbackPriority_old = ImmediatePriority;
    root.callbackNode = scheduleSyncCallback(
      performSyncWorkOnRoot.bind(null, root),
   );
    return;
  }

  const expirationTime = getNextRootExpirationTimeToWorkOn(root);
  const existingCallbackNode = root.callbackNode;
  if(expirationTime === NoWork) {
    //There's nothing to work on.
    if (existingCallbackNode !== null) {
      root.callbackNode = null;
      root.callbackExpirationTime = NoWork;
      root.callbackPriority_old = NoPriority;
    }
    return;
  }

  //TODO:If this is an update, we already read the current time. Pass the
  //time as an argument.
  const currentTime = requestCurrentTimeForUpdate();
  const priorityLevel = inferPriorityFromExpirationTime(
    currentTime,
    expirationTime,
 );

  //If there's an existing render task, confirm it has the correct priority and
  //expiration time. Otherwise, we'll cancel it and schedule a new one.
  if (existingCallbackNode !== null) {
    const existingCallbackPriority = root.callbackPriority_old;
    const existingCallbackExpirationTime = root.callbackExpirationTime;
    if(
      //Callback must have the exact same expiration time.
      existingCallbackExpirationTime === expirationTime &&
      //Callback must have greater or equal priority.
      existingCallbackPriority >= priorityLevel
   ) {
      //Existing callback is sufficient.
      return;
    }
    //Need to schedule a new task.
    //TODO:Instead of scheduling a new task, we should be able to change the
    //priority of the existing one.
    cancelCallback(existingCallbackNode);
  }

  root.callbackExpirationTime = expirationTime;
  root.callbackPriority_old = priorityLevel;

  let callbackNode;
  if(expirationTime === Sync) {
    //Sync React callbacks are scheduled on a special internal queue
    callbackNode = scheduleSyncCallback(performSyncWorkOnRoot.bind(null, root));
  } else if(disableSchedulerTimeoutBasedOnReactExpirationTime) {
    callbackNode = scheduleCallback(
      priorityLevel,
      performConcurrentWorkOnRoot.bind(null, root),
   );
  } else {
    callbackNode = scheduleCallback(
      priorityLevel,
      performConcurrentWorkOnRoot.bind(null, root),
      //Compute a task timeout based on the expiration time. This also affects
      //ordering because tasks are processed in timeout order.
      {timeout:expirationTimeToMs(expirationTime)-now()},
   );
  }

  root.callbackNode = callbackNode;
}

The ensureRootIsScheduled function above mainly dispatches to different scheduling functions depending on whether the work is synchronous or asynchronous.

The synchronous scheduling function scheduleSyncCallback(callback: SchedulerCallback):

  • If the sync queue is not empty, pushes the callback onto it (syncQueue.push(callback))
  • If the queue is empty, creates it and immediately schedules a flush on the task queue (Scheduler_scheduleCallback)
  • Uses performSyncWorkOnRoot as the SchedulerCallback

The following is the source code content of scheduleSyncCallback

Source File

export function scheduleSyncCallback(callback:SchedulerCallback) {
  //Push this callback into an internal queue. We'll flush these either in
  //the next tick, or earlier if something calls `flushSyncCallbackQueue`.
  if(syncQueue === null) {
    syncQueue = [callback];
    //Flush the queue in the next tick, at the earliest.
    immediateQueueCallbackNode = Scheduler_scheduleCallback(
      Scheduler_ImmediatePriority,
      flushSyncCallbackQueueImpl,
   );
  } else {
    //Push onto existing queue. Don't need to schedule a callback because
    //we already scheduled one when we created the queue.
    syncQueue.push(callback);
  }
  return fakeCallbackNode;
}

Asynchronous scheduling is much simpler: the task is pushed directly into the scheduling queue (Scheduler_scheduleCallback), with performConcurrentWorkOnRoot as the SchedulerCallback.

export function scheduleCallback(
  reactPriorityLevel: ReactPriorityLevel,
  callback: SchedulerCallback,
  options: SchedulerCallbackOptions | void | null,
) {
  const priorityLevel = reactPriorityToSchedulerPriority(reactPriorityLevel);
  return Scheduler_scheduleCallback(priorityLevel, callback, options);
}

Whether scheduling is synchronous or asynchronous, it eventually goes through Scheduler_scheduleCallback, whose core is function unstable_scheduleCallback(priorityLevel, callback, options); each path just supplies its own SchedulerCallback.

Tip: since peek is used repeatedly in the code below, here is its implementation first. It simply returns the first element of the heap, or null if the heap is empty.

peek source file

  export function peek(heap: Heap): Node | null {
    const first = heap[0];
    return first === undefined ? null : first;
  }
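peek comes from the scheduler's min-heap module, together with push and pop (not shown here). The heap is ordered by each task's sortIndex (with id as a tie-breaker), so the task that should run first always sits at the front. A small illustration, assuming the push from that same module:

const taskQueue = [];
push(taskQueue, {id: 1, sortIndex: 5000});
push(taskQueue, {id: 2, sortIndex: 250});
peek(taskQueue); //=> {id: 2, sortIndex: 250}: the smallest sortIndex wins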

The following is the source code of Scheduler_scheduleCallback

[Source file](https://github.com/facebook/r...)

//Push a task into the task scheduling queue
function unstable_scheduleCallback(priorityLevel, callback, options) {
  var currentTime = getCurrentTime();

  var startTime;
  var timeout;
  if (typeof options === 'object' && options !== null) {
    var delay = options.delay;
    if(typeof delay === 'number' && delay> 0) {
      startTime = currentTime + delay;
    } else {
      startTime = currentTime;
    }
    timeout =
      typeof options.timeout === 'number'
        ? options.timeout
        :timeoutForPriorityLevel(priorityLevel);
  } else {
    //Calculate different expiration time for different priorities
    timeout = timeoutForPriorityLevel(priorityLevel);
    startTime = currentTime;
  }

   //Define new expiration time
  var expirationTime = startTime + timeout;

  //define a new task
  var newTask = {
    id:taskIdCounter ++,
    callback,
    priorityLevel,
    startTime,
    expirationTime,
    sortIndex:-1,
  };
  if(enableProfiling) {
    newTask.isQueued = false;
  }

  if(startTime> currentTime) {
    //This is a delayed task.
    newTask.sortIndex = startTime;

    //Push the timeout task to the timeout queue
    push(timerQueue, newTask);
    if (peek(taskQueue) === null && newTask === peek(timerQueue)) {
      //All tasks are delayed, and this is the task with the earliest delay.
      if(isHostTimeoutScheduled) {
        //Cancel an existing timeout.
        cancelHostTimeout();
      } else {
        isHostTimeoutScheduled = true;
      }
      //Schedule a timeout.
      requestHostTimeout(handleTimeout, startTime-currentTime);
    }
  } else {
    newTask.sortIndex = expirationTime;

    //Push the new task into the task queue
    push(taskQueue, newTask);
    if(enableProfiling) {
      markTaskStart(newTask, currentTime);
      newTask.isQueued = true;
    }
    //Schedule a host callback, if needed. If we're already performing work,
    //wait until the next time we yield.
    //execute the callback method, if you have to work again, you need to wait for the completion of a callback
    if (!isHostCallbackScheduled && !isPerformingWork) {
      isHostCallbackScheduled = true;
      requestHostCallback(flushWork);
    }
  }

  return newTask;
}

Tip: markTaskStart is purely for profiling bookkeeping, and it corresponds to markTaskCompleted.

Source File

export function markTaskStart(
  task: {
    id: number,
    priorityLevel: PriorityLevel,
    ...
  },
  ms: number,
) {
  if (enableProfiling) {
    profilingState[QUEUE_SIZE]++;

    if (eventLog !== null) {
      //performance.now returns a float, representing milliseconds. When the
      //event is logged, it's coerced to an int. Convert to microseconds to
      //maintain extra degrees of precision.
      logEvent([TaskStartEvent, ms * 1000, task.id, task.priorityLevel]);
    }
  }
}

export function markTaskCompleted(
  task: {
    id: number,
    priorityLevel: PriorityLevel,
    ...
  },
  ms: number,
) {
  if (enableProfiling) {
    profilingState[PRIORITY] = NoPriority;
    profilingState[CURRENT_TASK_ID] = 0;
    profilingState[QUEUE_SIZE]--;

    if (eventLog !== null) {
      logEvent([TaskCompleteEvent, ms * 1000, task.id]);
    }
  }
}

unstable_scheduleCallback mainly does the following:

  • Computes the newTask's expirationTime from options.delay, options.timeout, and timeoutForPriorityLevel()

  • If the task is delayed (its startTime is in the future):

    • pushes it into the timer queue (timerQueue)
    • if all tasks are delayed and this one is the earliest, cancels any existing host timeout (cancelHostTimeout)
    • then calls requestHostTimeout
  • Otherwise, pushes the new task into the task queue (taskQueue) and requests a host callback

Source File

For completeness, the source of cancelHostTimeout:

  cancelHostTimeout = function() {
    clearTimeout(_timeoutID);
  };

And the source of requestHostTimeout:

  requestHostTimeout = function(cb, ms) {
    _timeoutID = setTimeout(cb, ms);
  };

So what is the cb passed to requestHostTimeout? It is handleTimeout:

  function handleTimeout(currentTime) {
    isHostTimeoutScheduled = false;
    advanceTimers(currentTime);

    if (!isHostCallbackScheduled) {
      if (peek(taskQueue) !== null) {
        isHostCallbackScheduled = true;
        requestHostCallback(flushWork);
      } else {
        const firstTimer = peek(timerQueue);
        if (firstTimer !== null) {
          requestHostTimeout(handleTimeout, firstTimer.startTime-currentTime);
        }
      }
    }
  }

The method above is important; it mainly does the following:

  1. Call advanceTimers to check tasks that are no longer delayed and add them to the queue.

The following is the source code of advanceTimers

function advanceTimers(currentTime) {
  //Check for tasks that are no longer delayed and add them to the queue.
  let timer = peek(timerQueue);
  while (timer !== null) {
    if(timer.callback === null) {
      //Timer was cancelled.
      pop(timerQueue);
    } else if(timer.startTime <= currentTime) {
      //Timer fired. Transfer to the task queue.
      pop(timerQueue);
      timer.sortIndex = timer.expirationTime;
      push(taskQueue, timer);
      if(enableProfiling) {
        markTaskStart(timer, currentTime);
        timer.isQueued = true;
      }
    } else {
      //Remaining timers are pending.
      return;
    }
    timer = peek(timerQueue);
  }
}
  2. Call requestHostCallback to start task scheduling; it kicks off performWorkUntilDeadline asynchronously via a MessageChannel

This requestHostCallback method is particularly important.

Source File

//Call performWorkUntilDeadline method through onmessage
channel.port1.onmessage = performWorkUntilDeadline;

//postMessage
requestHostCallback = function(callback) {
  scheduledHostCallback = callback;
  if(! isMessageLoopRunning) {
    isMessageLoopRunning = true;
    port.postMessage(null);
  }
};
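To see why a MessageChannel is used here rather than setTimeout, here is a standalone demo of the same yielding pattern (my own sketch, not React code): postMessage queues a macrotask without the ~4 ms clamping that deeply nested setTimeout calls are subject to, so the loop can hand control back to the browser and pick the work up again almost immediately.

//Standalone demo of yielding to the browser with MessageChannel (not React source).
const demoChannel = new MessageChannel();
let pendingWork = null; //each unit of work returns the next unit, or null when done

demoChannel.port1.onmessage = () => {
  const start = performance.now();
  //Work for at most ~5 ms, then give the browser a chance to paint.
  while (pendingWork !== null && performance.now() - start < 5) {
    pendingWork = pendingWork();
  }
  if (pendingWork !== null) {
    //More work left: schedule the next slice as a new macrotask.
    demoChannel.port2.postMessage(null);
  }
};

function scheduleWork(work) {
  pendingWork = work;
  demoChannel.port2.postMessage(null);
}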

performWorkUntilDeadline, in the same file, then calls scheduledHostCallback, which is the flushWork passed in earlier.

const performWorkUntilDeadline =() => {
  if (scheduledHostCallback !== null) {
    const currentTime = getCurrentTime();
    //Yield after `yieldInterval` ms, regardless of where we are in the vsync
    //cycle. This means there's always time remaining at the beginning of
    //the message event.
    deadline = currentTime + yieldInterval;
    const hasTimeRemaining = true;
    try {
      const hasMoreWork = scheduledHostCallback(
        hasTimeRemaining,
        currentTime,
     );
      if(! hasMoreWork) {
        isMessageLoopRunning = false;
        scheduledHostCallback = null;
      } else {
        //If there's more work, schedule the next message event at the end
        //of the preceding one.
        port.postMessage(null);
      }
    } catch(error) {
      //If a scheduler task throws, exit the current browser task so the
      //error can be observed.
      port.postMessage(null);
      throw error;
    }
  } else {
    isMessageLoopRunning = false;
  }
  //Yielding to the browser will give it a chance to paint, so we can
  //reset this.
  needsPaint = false;
};

The main role of flushWork is to call workLoop to execute all tasks in a loop

Source File

function flushWork(hasTimeRemaining, initialTime) {
  if(enableProfiling) {
    markSchedulerUnsuspended(initialTime);
  }

  //We'll need a host callback the next time work is scheduled.
  isHostCallbackScheduled = false;
  if(isHostTimeoutScheduled) {
    //We scheduled a timeout but it's no longer needed. Cancel it.
    isHostTimeoutScheduled = false;
    cancelHostTimeout();
  }

  isPerformingWork = true;
  const previousPriorityLevel = currentPriorityLevel;
  try {
    if(enableProfiling) {
      try {
        return workLoop(hasTimeRemaining, initialTime);
      } catch(error) {
        if (currentTask !== null) {
          const currentTime = getCurrentTime();
          markTaskErrored(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        throw error;
      }
    } else {
      //No catch in prod codepath.
      return workLoop(hasTimeRemaining, initialTime);
    }
  } finally {
    currentTask = null;
    currentPriorityLevel = previousPriorityLevel;
    isPerformingWork = false;
    if(enableProfiling) {
      const currentTime = getCurrentTime();
      markSchedulerSuspended(currentTime);
    }
  }
}

workLoop lives in the same file as flushWork; its job is to take the highest-priority task from the task queue and execute it.

Remember the "SchedulerCallback" mentioned above?

  • For synchronous tasks, performSyncWorkOnRoot is executed

  • For asynchronous tasks, performConcurrentWorkOnRoot is executed

function workLoop(hasTimeRemaining, initialTime) {
  let currentTime = initialTime;
  advanceTimers(currentTime);
  currentTask = peek(taskQueue);
  while (
    currentTask !== null &&
    !(enableSchedulerDebugging && isSchedulerPaused)
  ) {
    if (
      currentTask.expirationTime > currentTime &&
      (!hasTimeRemaining || shouldYieldToHost())
    ) {
      //This currentTask hasn't expired, and we've reached the deadline.
      break;
    }
    const callback = currentTask.callback;
    if (callback !== null) {
      currentTask.callback = null;
      currentPriorityLevel = currentTask.priorityLevel;
      const didUserCallbackTimeout = currentTask.expirationTime <= currentTime;
      markTaskRun(currentTask, currentTime);
      const continuationCallback = callback(didUserCallbackTimeout);
      currentTime = getCurrentTime();
      if (typeof continuationCallback === 'function') {
        currentTask.callback = continuationCallback;
        markTaskYield(currentTask, currentTime);
      } else {
        if (enableProfiling) {
          markTaskCompleted(currentTask, currentTime);
          currentTask.isQueued = false;
        }
        if (currentTask === peek(taskQueue)) {
          pop(taskQueue);
        }
      }
      advanceTimers(currentTime);
    } else {
      pop(taskQueue);
    }
    currentTask = peek(taskQueue);
  }
  //Return whether there's additional work
  if (currentTask !== null) {
    return true;
  } else {
    const firstTimer = peek(timerQueue);
    if (firstTimer !== null) {
      requestHostTimeout(handleTimeout, firstTimer.startTime - currentTime);
    }
    return false;
  }
}

Eventually the work is carried out, one fiber at a time, through performUnitOfWork.

performUnitOfWork itself is synchronous; what makes the work interruptible is the loop around it, which checks whether the time slice has run out before processing each unit.
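The surrounding loop (the concurrent work loop in the same file) looks roughly like this; shouldYield asks the Scheduler whether the current time slice has been used up, and workInProgress is the fiber saved between slices:

//Roughly what the concurrent work loop does (paraphrased, not copied verbatim).
function workLoopConcurrent() {
  //Perform work until the Scheduler asks us to yield.
  while (workInProgress !== null && !shouldYield()) {
    performUnitOfWork(workInProgress);
  }
}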

Source File

function performUnitOfWork(unitOfWork:Fiber):void {
  //The current, flushed, state of this fiber is the alternate. Ideally
  //nothing should rely on this, but relying on it here means that we don't
  //need an additional field on the work in progress.
  const current = unitOfWork.alternate;
  setCurrentDebugFiberInDEV(unitOfWork);

  let next;
  if (enableProfilerTimer && (unitOfWork.mode & ProfileMode) !== NoMode) {
    startProfilerTimer(unitOfWork);
    next = beginWork(current, unitOfWork, renderExpirationTime);
    stopProfilerTimerIfRunningAndRecordDelta(unitOfWork, true);
  } else {
    next = beginWork(current, unitOfWork, renderExpirationTime);
  }

  resetCurrentDebugFiberInDEV();
  unitOfWork.memoizedProps = unitOfWork.pendingProps;
  if(next === null) {
    //If this doesn't spawn new work, complete the current work.
    completeUnitOfWork(unitOfWork);
  } else {
    workInProgress = next;
  }

  ReactCurrentOwner.current = null;
}

startProfilerTimer and stopProfilerTimerIfRunningAndRecordDelta above simply record how long the fiber's work took.

Source File

function startProfilerTimer(fiber:Fiber):void {
  if(! enableProfilerTimer) {
    return;
  }

  profilerStartTime = now();

  if (((fiber.actualStartTime: any): number) < 0) {
    fiber.actualStartTime = now();
  }
}

function stopProfilerTimerIfRunningAndRecordDelta(
  fiber: Fiber,
  overrideBaseTime: boolean,
): void {
  if (!enableProfilerTimer) {
    return;
  }

  if (profilerStartTime >= 0) {
    const elapsedTime = now() - profilerStartTime;
    fiber.actualDuration += elapsedTime;
    if (overrideBaseTime) {
      fiber.selfBaseDuration = elapsedTime;
    }
    profilerStartTime = -1;
  }
}

Finally we reach beginWork. What's inside? It operates on workInProgress and contains a big switch over the fiber's tag.

If you want to read the beginWork source, see the beginWork source file.

Summary

The last part is the summary. Should I even write one at length? Every reader's understanding of source code is different depending on when and in what mood they read it (the same goes for me when I re-read it), and every pass deserves its own summary.

Still, without a summary this analysis would feel unfinished, so here is a short one (guaranteed original, you won't find it copied anywhere else):

  1. A fiber is essentially a node; the tree is traversed as a linked list
  2. Fiber derives an expirationTime for each update from its priority
  3. Because of the linked-list structure, time slicing can easily be interrupted and resumed
  4. Time slicing is implemented with setTimeout + MessageChannel (postMessage)
  5. When all tasks are delayed, the existing timeout is cancelled with clearTimeout and rescheduled
  6. The scheduler also records the number of tasks and how long they run

Why does Fiber use a linked list?

The linked-list structure is a means, not an end. The React team's original goal was to simulate the call stack.

The call stack is most often used to store a subroutine's return address: when any subroutine is called, the main program must record where execution should resume once the subroutine finishes. If the called subroutine calls other subroutines, its own return address is pushed onto the call stack and popped when it completes. Besides return addresses, the stack also holds local variables, function parameters, and the surrounding environment.

Therefore, the Fiber object is designed as a linked-list structure; the list is formed by the following main attributes:

  • type: the node's type
  • return: the parent of the current node
  • child: the first child node
  • sibling: the first sibling to the right
  • alternate: the equivalent node in the old tree

When we traverse the tree for the diff, even if the traversal is interrupted we only need to remember the single node we stopped at, and the traversal and diff can be resumed in the next time slice. This is the major benefit of fiber's linked-list data structure, as the sketch below shows.
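A minimal sketch of that idea (my own demo, not React's code; performUnit and rootFiber are hypothetical, and requestIdleCallback is used here only for brevity, which leads straight into the next question):

//The entire traversal state is one pointer, so pausing and resuming is trivial.
let nextUnitOfWork = rootFiber;

function workOnSlice(deadline) {
  while (nextUnitOfWork !== null && deadline.timeRemaining() > 1) {
    //performUnit processes one fiber and returns the next one (child, sibling,
    //or an ancestor's sibling via `return`), or null when the tree is done.
    nextUnitOfWork = performUnit(nextUnitOfWork);
  }
  if (nextUnitOfWork !== null) {
    //Not finished: resume exactly where we stopped in the next idle slice.
    requestIdleCallback(workOnSlice);
  }
}

requestIdleCallback(workOnSlice);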

Why not use requestIdleCallback for time slicing?

Within each cycle, the browser executes, roughly in order:

  1. Macro tasks
  2. Micro tasks
  3. requestAnimationFrame
  4. IntersectionObserver
  5. Update the interface
  6. requestIdleCallback
  7. Next frame

According to the official description:

The window.requestIdleCallback() method queues a function to be called during the browser's idle periods. This enables developers to perform background and low-priority work on the main event loop without affecting latency-critical events such as animations and input response. Functions are generally executed in first-in-first-out order; however, if a callback specifies a timeout, it may be run out of order in order to execute it before the timeout expires.
You can call requestIdleCallback() inside an idle callback to schedule another callback to run no sooner than the next pass through the event loop.

This seems to fit the idea of time slicing perfectly, so React's time-sliced rendering originally wanted to use this API. But browser support was weak, and requestIdleCallback is a bit too strict: it does not fire often enough to drive smooth UI rendering.

What we want from the Fiber architecture is that the reconciliation process can be interrupted, yielding the CPU back to the browser at the right moments. So the React team had to implement its own version.

In fact, the idea behind Fiber is consistent with the concept of coroutines. For example:

An ordinary function (cannot be interrupted and resumed):

const tasks = []
function run() {
  let task
  while(task = tasks.shift()) {
    execute(task)
  }
}

If you use Generator syntax:

const tasks = []
function * run() {
  let task

  while(task = tasks.shift()) {
    //Determine if there are high-priority events that need to be processed, if so give control
    if(hasHighPriorityEvent()) {
      yield
    }

    //After processing high-priority events, restore the function call stack and continue execution ...
    execute(task)
  }
}

React did try Generators, but found them very troublesome and eventually gave up.

Why not use Generators for time slicing?

There are two main reasons:

  1. Generators force you to wrap every function in the generator stack. This not only adds a lot of syntactic overhead but also increases the runtime overhead in existing implementations. Better than nothing, but the performance problem remains.
  2. Generators are stateful: you cannot jump into the middle of one to resume a different, higher-priority piece of work without replaying everything before it, which makes reusing previously computed results difficult.

Why not use Web Workers?

Another tempting idea is to move the heavy work into a Web Worker. But a Web Worker runs in its own JavaScript VM and does not share memory with the main thread, so fiber trees and component state would have to be serialized and copied back and forth with postMessage on every update, and a worker cannot touch the DOM, so the commit phase has to happen on the main thread anyway. The extra CPU spent on copying data tends to eat up whatever is gained, which is why React.Component updates stay on the main thread and React slices the work there instead.
