NeRF - Neural Radiance Fields for View Synthesis (2D to 3D) - DEMO

Published: 16 December 2021
on channel: Data Science Garage

NeRF stands for Neural Radiance Fields for View Synthesis. The problem it solves belongs to the Computer Vision (CV) domain, and interest in NeRF is growing.

In this video I demonstrate quick results from a minimal implementation of NeRF volumetric rendering (the code is written in Keras with Python).

Example code: https://keras.io/examples/vision/nerf/

NeRF is a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function from a sparse set of input views. To render a novel view, points are sampled along each camera ray, the network is queried at those points, and the predicted colors are composited using the predicted densities (see the sketch below).
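For intuition, here is a rough sketch of that compositing step in TensorFlow. This is not the exact code from the Keras example; the function name, variable names, and per-sample distances are assumptions for illustration:

import tensorflow as tf

def composite_ray_color(rgb, sigma, delta):
    # rgb:   (num_samples, 3) colors predicted along one ray
    # sigma: (num_samples,)   volume densities predicted along the ray
    # delta: (num_samples,)   distances between adjacent samples
    alpha = 1.0 - tf.exp(-sigma * delta)  # opacity contributed by each sample
    # Transmittance: how much light survives to reach each sample.
    transmittance = tf.math.cumprod(1.0 - alpha + 1e-10, exclusive=True)
    weights = alpha * transmittance
    return tf.reduce_sum(weights[:, None] * rgb, axis=0)  # final pixel color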

Input to the neural network:
spatial location (x, y, z).
viewing direction (theta, phi).

NeRF uses a fully-connected neural network which outputs:
emitted color (RGB).
volume density.

A scene is represented by a fully-connected (non-convolutional) deep network. This makes it possible to represent a scene as a continuous 5D function (a minimal model sketch is shown below).
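As a rough illustration of such a network in Keras (the layer counts and feature sizes below are assumptions, not the exact architecture from the Keras example or the paper):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_nerf_mlp(num_pos_features=63, num_dir_features=27, width=256):
    # Inputs are the positionally encoded spatial location (x, y, z)
    # and viewing direction (theta, phi); the feature sizes here are
    # illustrative assumptions.
    pos_in = keras.Input(shape=(num_pos_features,), name="position")
    dir_in = keras.Input(shape=(num_dir_features,), name="direction")

    x = pos_in
    for _ in range(4):
        x = layers.Dense(width, activation="relu")(x)

    # Volume density depends only on the spatial location.
    sigma = layers.Dense(1, activation="relu", name="density")(x)

    # Color additionally depends on the viewing direction.
    h = layers.Concatenate()([x, dir_in])
    h = layers.Dense(width // 2, activation="relu")(h)
    rgb = layers.Dense(3, activation="sigmoid", name="color")(h)

    return keras.Model(inputs=[pos_in, dir_in], outputs=[rgb, sigma])

model = build_nerf_mlp()
model.summary()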

NeRF is also promising in areas such as AI for medicine, geometry visualization, view-dependent appearance modeling, creating 3D meshes, capturing 360° scenes, and much more. A key ingredient inside NeRF itself is positional encoding, which maps each input coordinate to a higher-dimensional space so the network can represent fine detail (a minimal sketch follows).
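A minimal sketch of that positional encoding (the number of frequency bands is configurable; the paper uses 10 bands for position and 4 for direction):

import math
import tensorflow as tf

def positional_encoding(p, num_bands=10):
    # Map each coordinate to [sin(2^0 * pi * p), cos(2^0 * pi * p), ...,
    # sin(2^(L-1) * pi * p), cos(2^(L-1) * pi * p)].
    frequencies = 2.0 ** tf.range(num_bands, dtype=tf.float32)
    angles = p[..., None] * frequencies * math.pi  # shape (..., dims, L)
    encoded = tf.concat([tf.sin(angles), tf.cos(angles)], axis=-1)
    return tf.reshape(encoded, tf.concat([tf.shape(p)[:-1], [-1]], axis=0))

# Example: encode a batch of two 3D points.
points = tf.constant([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])
print(positional_encoding(points).shape)  # (2, 60) with 10 bands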

Good to read more:
Original paper: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (https://arxiv.org/pdf/2003.08934.pdf)
Project page by one of the authors: https://www.matthewtancik.com/nerf
