Advanced search

Media (0)

Keyword: - Tags - /signalement

No media matching your criteria is available on the site.

Other articles (46)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, go to the "Administrer" section of the site.
    From there, the navigation menu gives access to a "Gestion des langues" section where support for new languages can be activated.
    Each newly added language can still be deactivated as long as no object has been created in that language; once one has, the language is greyed out in the configuration and (...)

  • Definable image and logo sizes

    9 February 2011

    In many places on the site, logos and images are resized to fit the slots defined by the themes. All of these sizes, which can vary from one theme to another, can be defined directly in the theme, sparing the user from having to configure them manually after changing the appearance of the site.
    These image sizes are also available in the MediaSPIP Core specific configuration. The maximum size of the site logo in pixels (...)

  • Apache-specific configuration

    4 February 2011

    Specific modules
    For the Apache configuration, it is advisable to enable certain modules that are not specific to MediaSPIP but that improve performance: mod_deflate and mod_headers, so that Apache compresses pages automatically (see this tutorial); mod_expires, to handle the expiration of hits correctly (see this tutorial).
    It is also advisable to add Apache support for the WebM mime type, as described in this tutorial; a sketch of these directives follows this excerpt.
    Creating a (...)
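
    A hedged illustration of the directives mentioned above (not from the original article; the module names are standard Apache, while the file types and cache lifetime shown are placeholder assumptions):

     # Compress text responses on the fly (requires mod_deflate and mod_headers)
     AddOutputFilterByType DEFLATE text/html text/css application/javascript
     # Handle expiration of static assets (requires mod_expires)
     ExpiresActive On
     ExpiresByType image/png "access plus 1 month"
     # Serve WebM files with the correct mime type
     AddType video/webm .webm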

On other sites (5035)

  • FFmpeg Processes keep running even after finishAndRemoveTask()

    9 October 2018, by kataroty

    I have a program that takes audio files and converts them using Bravobit FFmpeg (Bravobit FFmpeg on GitHub).

    It works as intended until someone minimizes the app during the conversion process. I am trying to fix that, but so far I have not been successful.

    This part is in the Main Activity; it creates and starts my AudioProcessor:

     AudioProcessor ac = new AudioProcessor(getApplicationContext(), PostNewActivity.this);
     ac.setBackgroundMp3File(backgroundAudio);
     ac.setMicPcmFile(micOutputPCM);
     ac.setOutputFile(tempOutFile);
     ac.setListener(new AudioProcessor.AudioProcessorListener() {
         public void onStart() {
             Log.d("Audioprocessor", "Audioprocessor started");
         }

         public void onSuccess(File output) {
             Log.d("Audioprocessor", "Audioprocessor is successful");
             goToPublishView();
         }

         public void onError(String message) {
             System.out.println("Audioprocessor: " + message);
         }

         public void onFinish() {
             Log.d("Audioprocessor", "Audioprocessor is finished");
         }
     });

     try {
         if (tempOutFile.exists()) {
             tempOutFile.delete();
         }
         ac.process();
     } catch (Exception ex) {
         System.out.println("Processing failed!");
         ex.printStackTrace();
     }

    And here is the AudioProcessor itself:

    public class AudioProcessor {

       private Context context;
       private FFmpeg ffmpeg;
       private AudioProcessorListener listener;

       private File micPcmFile;
       private File backgroundMp3File;

       private File pcmtowavTempFile;
       private File mp3towavTempFile;
       private File combinedwavTempFile;

       private File outputFile;
       private File volumeChangedTempFile;

       TextView extensionDownload, percentProgress;


       public AudioProcessor(Context context, Activity activity) {
           ffmpeg = FFmpeg.getInstance(context);
           percentProgress = activity.findViewById(R.id.percentProgress);
           percentProgress.setSingleLine(false);
           this.context = context;
           prepare();
       }

       /**
        * Program main method. Starts running program
        * @throws Exception
        */
       public void process() throws Exception {
           if (!ffmpeg.isSupported()) {
               Log.e("AudioProcessor", "FFMPEG not supported! Cannot convert audio!");
               throw new Exception("FFMPeg has to be supported");
           }
           if (!checkIfAllFilesPresent()) {
               Log.e("AudioProcessor", "All files are not set yet. Please set file first");
               throw new Exception("Files are not set!");
           }

           Log.e("AudioProcessor", "Start processing audio");
           listener.onStart();



           Handler handler = new Handler();
           handler.postDelayed(new Runnable() {
               @Override
               public void run() {
                   convertPCMToWav();
               }
           }, 200);
       }

       /**
        * Prepares program
        */
       private void prepare() {
           prepareTempFiles();
       }

       /**
        * Converts the PCM file to a WAV file. Automatically creates the new file.
        */
       private void convertPCMToWav() {
           Log.e("AudioProcessor", "Convert PCM TO Wav");
           //ffmpeg -f s16le -ar 44.1k -ac 2 -i file.pcm file.wav
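           // Note (assumption, not in the original post): the reference command
           // above includes "-ac 2", but the array below omits "-ac"; ffmpeg's raw
           // s16le demuxer then defaults to mono input.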
           String[] cmd = { "-f" , "s16le", "-ar", "44.1k", "-i", micPcmFile.toString(), "-y", pcmtowavTempFile.toString()};
           ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {
                   super.onStart();
                   percentProgress.setVisibility(View.VISIBLE);
                   percentProgress.setText("Converting your recording\n"+"1/5");
               }

               @Override
               public void onSuccess(String message) {
                   super.onSuccess(message);
                   convertMP3ToWav();
               }
               @Override
               public void onFailure(String message) {
                   super.onFailure(message);
                   onError(message);
                   convertPCMToWav();
               }
           });
       }

       /**
        * Converts mp3 file to wav file.
        * Automatically creates Wav file
        */
       private void convertMP3ToWav() {
           Log.e("AudioProcessor", "Convert MP3 TO Wav");

           //ffmpeg -i file.mp3 file.wav
           String[] cmd = { "-i" , backgroundMp3File.toString(), "-y", mp3towavTempFile.toString() };
           ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
               @Override
               public void onStart() {
                   super.onStart();
                   percentProgress.setText("Converting background audio\n"+"2/5");
                   Log.e("AudioProcessor", "Starging convertgf MP3 TO Wav");
               }

               @Override
               public void onSuccess(String message) {
                   super.onSuccess(message);
                   changeMicAudio();
               }
               @Override
               public void onFailure(String message) {
                   super.onFailure(message);
                   Log.e("AudioProcessor", "Failed to convert MP3 TO Wav");
                   //onError(message);
                   throw new RuntimeException("Failed to convert MP3 TO Wav");
                   //convertMP3ToWav();
               }
           });
       }

       /**
        * Combines 2 wav files into one wav file. Overlays audio
        */
       private void combineWavs() {
           Log.e("AudioProcessor", "Combine wavs");
           //ffmpeg -i C:\Users\VR1\Desktop\_mp3.wav -i C:\Users\VR1\Desktop\_pcm.wav -filter_complex amix=inputs=2:duration=first:dropout_transition=3 C:\Users\VR1\Desktop\out.wav

           String[] cmd = { "-i" , pcmtowavTempFile.toString(), "-i", volumeChangedTempFile.toString(), "-filter_complex", "amix=inputs=2:duration=first:dropout_transition=3", "-y",combinedwavTempFile.toString()};
           ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {
                   super.onStart();
                   percentProgress.setText("Combining the two audio files\n"+"4/5");
               }

               @Override
               public void onSuccess(String message) {
                   super.onSuccess(message);
                   encodeWavToAAC();

               }
               @Override
               public void onFailure(String message) {
                   super.onFailure(message);
                   onError(message);
               }
           });
       }

       private void changeMicAudio(){
           Log.e("AudioProcessor", "Change audio volume");
           //ffmpeg -i input.wav -filter:a "volume=1.5" output.wav

           String[] cmdy = { "-i", mp3towavTempFile.toString(),  "-af", "volume=0.9", "-y",volumeChangedTempFile.toString()};
           ffmpeg.execute(cmdy, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {
                   super.onStart();
                   percentProgress.setText("Normalizing volume\n"+"3/5");
               }

               @Override
               public void onSuccess(String message) {
                   combineWavs();
                   super.onSuccess(message);
               }
               @Override
               public void onFailure(String message) {
                   super.onFailure(message);
                   Log.e("AudioProcessor", message);
               }
           });
       }


       /**
        * Do something on error. Releases program data (deletes files)
        * @param message
        */
       private void onError(String message) {
           release();
           if (listener != null) {
               //listener.onError(message);
           }
       }
       /**
        * Encode to AAC
        */
       private void encodeWavToAAC() {
           Log.e("AudioProcessor", "Encode to AAC");
           //ffmpeg -i file.wav -c:a aac -b:a 128k -f adts output.m4a
           String[] cmd = { "-i" , combinedwavTempFile.toString(), "-c:a", "aac", "-b:a", "128k", "-f", "adts", "-y",outputFile.toString()};
           ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {

               @Override
               public void onStart() {
                   super.onStart();
                   percentProgress.setText("Normalizing volume\n"+"3/5");
               }

               @Override
               public void onSuccess(String message) {
                   super.onSuccess(message);
                   if (listener != null) {
                       listener.onSuccess(outputFile);
                   }
                   release();
               }
               @Override
               public void onFailure(String message) {
                   super.onFailure(message);
                   onError(message);
                   encodeWavToAAC();
               }
           });
       }

       /**
        * Uninitializes class
        */
       private void release() {
           if (listener != null) {
               listener.onFinish();
           }
           destroyTempFiles();
       }

       /**
        * Prepares the required temp files by deleting them if they exist.
        * The files must not exist before the ffmpeg commands run; ffmpeg creates them automatically.
        */
       private void prepareTempFiles() {
           pcmtowavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_pcm.wav");
           mp3towavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_mp3.wav");
           combinedwavTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_combined.wav");
           volumeChangedTempFile = new File(context.getFilesDir()+ Common.TEMP_LOCAL_DIR + "/" + "_volumeChanged.wav");

           destroyTempFiles();
       }

       /**
        * Destroys temp required files
        */
       private void destroyTempFiles() {

           pcmtowavTempFile.delete();
           mp3towavTempFile.delete();
           combinedwavTempFile.delete();
           volumeChangedTempFile.delete();
       }

       /**
        * Checks if all files are set, so we can process them
        * @return - all files ready
        */
       private boolean checkIfAllFilesPresent() {
           if(micPcmFile == null || backgroundMp3File == null || outputFile == null) {
               Log.e("AudioProcessor", "All files are not set! Set all files!");
               return false;
           }
           return true;
       }

       public void setOutputFile(File outputFile) {
           this.outputFile = outputFile;
       }

       public void setListener(AudioProcessorListener listener) {
           this.listener = listener;
       }

       public void setMicPcmFile(File micPcmFile) {
           this.micPcmFile = micPcmFile;
       }

       public void setBackgroundMp3File(File backgroundMp3File) {
           this.backgroundMp3File = backgroundMp3File;
       }


       public interface AudioProcessorListener {
           void onStart();
           void onSuccess(File output);
           void onError(String message);
           void onFinish();
       }
    }

    The way I usually test this and get the crashes is to let the AudioProcessor reach the second method, convertMP3ToWav(), and then close the application. When I start it back up and begin processing files again, the application crashes.

    I have tried many approaches. I thought about sending the application back to the start when it is minimized, using this code in the Main Activity:

     @Override
     protected void onUserLeaveHint() {
         if (Build.VERSION.SDK_INT >= 21) {
             finishAndRemoveTask();
         } else {
             finish();
         }
     }

    I thought that would stop everything, but it still kept crashing. After some debugging I found that even when I minimize the app and call finishAndRemoveTask(), the AudioProcessor keeps working: it still runs all of the ffmpeg commands and even calls the onSuccess()/onFinish() methods.

    How can I completely stop everything, or at least stop and clear the AudioProcessor, when the application is minimized?
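
    A minimal sketch of one possible direction (not from the original post; the cancel wiring is hypothetical): give AudioProcessor a cancellation flag that every onSuccess() handler checks before chaining the next ffmpeg step, and trip it from the activity lifecycle. This stops the chain between steps; a command that is already running would still finish unless the library exposes a way to kill the native process.

       // Hypothetical addition inside AudioProcessor
       private volatile boolean cancelled = false;

       public void cancel() {
           cancelled = true;
           release(); // fires onFinish() and deletes the temp files
       }

       // Then guard each hand-off in the onSuccess() handlers, e.g.:
       // if (!cancelled) convertMP3ToWav();

    And in the activity, trip the flag when the app leaves the foreground (this assumes ac is kept as a field rather than a local variable):

       @Override
       protected void onStop() {
           super.onStop();
           if (ac != null) ac.cancel();
       }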

  • Play video using mse (media source extension) in google chrome

    23 August 2019, by liyuqihxc

    I’m working on a project that converts an RTSP stream (ffmpeg) and plays it on a web page (SignalR + MSE).

    So far it works pretty much as I expected on the latest versions of Edge and Firefox, but not on Chrome.

    Here’s the code:

    public class WebmMediaStreamContext
    {
       private Process _ffProcess;
       private readonly string _cmd;
       private byte[] _initSegment;
       private Task _readMediaStreamTask;
       private CancellationTokenSource _cancellationTokenSource;

       private const string _CmdTemplate = "-i {0} -c:v libvpx -tile-columns 4 -frame-parallel 1 -keyint_min 90 -g 90 -f webm -dash 1 pipe:";

       public static readonly byte[] ClusterStart = { 0x1F, 0x43, 0xB6, 0x75, 0x01, 0x00, 0x00, 0x00 };

       public event EventHandler<ClusterReadyEventArgs> ClusterReadyEvent;

       public WebmMediaStreamContext(string rtspFeed)
       {
           _cmd = string.Format(_CmdTemplate, rtspFeed);
       }

       public async Task StartConverting()
       {
           if (_ffProcess != null)
               throw new InvalidOperationException();

           _ffProcess = new Process();
           _ffProcess.StartInfo = new ProcessStartInfo
           {
               FileName = "ffmpeg/ffmpeg.exe",
               Arguments = _cmd,
               UseShellExecute = false,
               CreateNoWindow = true,
               RedirectStandardOutput = true
           };
           _ffProcess.Start();

           _initSegment = await ParseInitSegmentAndStartReadMediaStream();
       }

       public byte[] GetInitSegment()
       {
           return _initSegment;
       }

       // Find the first cluster, and everything before it is the InitSegment
       private async Task ParseInitSegmentAndStartReadMediaStream()
       {
           Memory<byte> buffer = new byte[10 * 1024];
           int length = 0;
           while (length != buffer.Length)
           {
               length += await _ffProcess.StandardOutput.BaseStream.ReadAsync(buffer.Slice(length));
               int cluster = buffer.Span.IndexOf(ClusterStart);
               if (cluster >= 0)
               {
                   _cancellationTokenSource = new CancellationTokenSource();
                   _readMediaStreamTask = new Task(() => ReadMediaStreamProc(buffer.Slice(cluster, length - cluster).ToArray(), _cancellationTokenSource.Token), _cancellationTokenSource.Token, TaskCreationOptions.LongRunning);
                   _readMediaStreamTask.Start();
                   return buffer.Slice(0, cluster).ToArray();
               }
           }

           throw new InvalidOperationException();
       }

       private void ReadMoreBytes(Span<byte> buffer)
       {
           int size = buffer.Length;
           while (size > 0)
           {
               int len = _ffProcess.StandardOutput.BaseStream.Read(buffer.Slice(buffer.Length - size));
               size -= len;
           }
       }

       // Parse every single cluster and fire ClusterReadyEvent
       private void ReadMediaStreamProc(byte[] bytesRead, CancellationToken cancel)
       {
           Span<byte> buffer = new byte[5 * 1024 * 1024];
           bytesRead.CopyTo(buffer);
           int bufferEmptyIndex = bytesRead.Length;

           do
           {
                if (bufferEmptyIndex < ClusterStart.Length + 4)
               {
                   ReadMoreBytes(buffer.Slice(bufferEmptyIndex, 1024));
                   bufferEmptyIndex += 1024;
               }

               int clusterDataSize = BitConverter.ToInt32(
                   buffer.Slice(ClusterStart.Length, 4)
                   .ToArray()
                   .Reverse()
                   .ToArray()
               );
               int clusterSize = ClusterStart.Length + 4 + clusterDataSize;
               if (clusterSize > buffer.Length)
               {
                   byte[] newBuffer = new byte[clusterSize];
                   buffer.Slice(0, bufferEmptyIndex).CopyTo(newBuffer);
                   buffer = newBuffer;
               }

                if (bufferEmptyIndex < clusterSize)
               {
                   ReadMoreBytes(buffer.Slice(bufferEmptyIndex, clusterSize - bufferEmptyIndex));
                   bufferEmptyIndex = clusterSize;
               }

               ClusterReadyEvent?.Invoke(this, new ClusterReadyEventArgs(buffer.Slice(0, bufferEmptyIndex).ToArray()));

               bufferEmptyIndex = 0;
           } while (!cancel.IsCancellationRequested);
       }
    }

    I use ffmpeg to convert the RTSP stream to a VP8 WebM byte stream and parse it into an "Init Segment" (EBML header, info, tracks, ...) and "Media Segments" (clusters), then send them to the browser via SignalR.

    $(function () {

       var mediaSource = new MediaSource();
       var mimeCodec = 'video/webm; codecs="vp8"';

       var video = document.getElementById('video');

       mediaSource.addEventListener('sourceopen', callback, false);
       function callback(e) {
           var sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
           var queue = [];
           var blockindex = 0; // counter used in the log below

           sourceBuffer.addEventListener('updateend', function () {
               if (queue.length === 0) {
                   return;
               }

               var base64 = queue[0];
               if (base64.length === 0) {
                   mediaSource.endOfStream();
                   queue.shift();
                   return;
               } else {
                   var buffer = new Uint8Array(atob(base64).split("").map(function (c) {
                       return c.charCodeAt(0);
                   }));
                   sourceBuffer.appendBuffer(buffer);
                   queue.shift();
               }
           }, false);

           var connection = new signalR.HubConnectionBuilder()
               .withUrl("/signalr-video")
               .configureLogging(signalR.LogLevel.Information)
               .build();
           connection.start().then(function () {
               connection.stream("InitVideoReceive")
                   .subscribe({
                       next: function(item) {
                           if (queue.length === 0 && !sourceBuffer.updating) {
                               var buffer = new Uint8Array(atob(item).split("").map(function (c) {
                                   return c.charCodeAt(0);
                               }));
                               sourceBuffer.appendBuffer(buffer);
                               console.log(blockindex++ + " : " + buffer.byteLength);
                           } else {
                               queue.push(item);
                           }
                       },
                       complete: function () {
                           queue.push('');
                       },
                       error: function (err) {
                           console.error(err);
                       }
                   });
           });
       }
       video.src = window.URL.createObjectURL(mediaSource);
    })

    Chrome plays the video for only 3-5 seconds and then stops to buffer, even though plenty of clusters have been transferred and appended to the SourceBuffer.

    Here’s the information from chrome://media-internals/:

    Player Properties:

    render_id: 217
    player_id: 1
    origin_url: http://localhost:52531/
    frame_url: http://localhost:52531/
    frame_title: Home Page
    url: blob:http://localhost:52531/dcb25d89-9830-40a5-ba88-33c13b5c03eb
    info: Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    pipeline_state: kSuspended
    found_video_stream: true
    video_codec_name: vp8
    video_dds: false
    video_decoder: FFmpegVideoDecoder
    duration: unknown
    height: 720
    width: 1280
    video_buffering_state: BUFFERING_HAVE_NOTHING
    for_suspended_start: false
    pipeline_buffering_state: BUFFERING_HAVE_NOTHING
    event: PAUSE

    Log

    Timestamp       Property            Value
    00:00:00 00     origin_url          http://localhost:52531/
    00:00:00 00     frame_url           http://localhost:52531/
    00:00:00 00     frame_title         Home Page
    00:00:00 00     url                 blob:http://localhost:52531/dcb25d89-9830-40a5-ba88-33c13b5c03eb
    00:00:00 00     info                ChunkDemuxer: buffering by DTS
    00:00:00 35     pipeline_state      kStarting
    00:00:15 213    found_video_stream  true
    00:00:15 213    video_codec_name    vp8
    00:00:15 216    video_dds           false
    00:00:15 216    video_decoder       FFmpegVideoDecoder
    00:00:15 216    info                Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    00:00:15 216    pipeline_state      kPlaying
    00:00:15 213    duration            unknown
    00:00:16 661    height              720
    00:00:16 661    width               1280
    00:00:16 665    video_buffering_state       BUFFERING_HAVE_ENOUGH
    00:00:16 665    for_suspended_start         false
    00:00:16 665    pipeline_buffering_state    BUFFERING_HAVE_ENOUGH
    00:00:16 667    pipeline_state      kSuspending
    00:00:16 670    pipeline_state      kSuspended
    00:00:52 759    info                Effective playback rate changed from 0 to 1
    00:00:52 759    event               PLAY
    00:00:52 759    pipeline_state      kResuming
    00:00:52 760    video_dds           false
    00:00:52 760    video_decoder       FFmpegVideoDecoder
    00:00:52 760    info                Selected FFmpegVideoDecoder for video decoding, config: codec: vp8 format: 1 profile: vp8 coded size: [1280,720] visible rect: [0,0,1280,720] natural size: [1280,720] has extra data? false encryption scheme: Unencrypted rotation: 0°
    00:00:52 760    pipeline_state      kPlaying
    00:00:52 793    height              720
    00:00:52 793    width               1280
    00:00:52 798    video_buffering_state       BUFFERING_HAVE_ENOUGH
    00:00:52 798    for_suspended_start         false
    00:00:52 798    pipeline_buffering_state    BUFFERING_HAVE_ENOUGH
    00:00:56 278    video_buffering_state       BUFFERING_HAVE_NOTHING
    00:00:56 295    for_suspended_start         false
    00:00:56 295    pipeline_buffering_state    BUFFERING_HAVE_NOTHING
    00:01:20 717    event               PAUSE
    00:01:33 538    event               PLAY
    00:01:35 94     event               PAUSE
    00:01:55 561    pipeline_state      kSuspending
    00:01:55 563    pipeline_state      kSuspended

    Can someone tell me what’s wrong with my code, or does Chrome require some special configuration to work?

    Thanks 

    Please excuse my English :)

  • Injecting Metadata into an .mp4 file

    21 August 2019, by TrueCP5

    I’m working on a library that injects metadata into a .mp4 file so that the video is displayed correctly as a 360 video. The input file is a standard .mp4 in the equirectangular format. I know what metadata needs to be injected; I just do not know how to inject it.

    I spent some time looking for libraries that can do this, but could only find ones for extracting metadata, not injecting/embedding/writing it. The alternative I found was to use Spatial Media as a command-line application to inject the metadata more easily. The problem is that I know zero Python whatsoever, so I’m leaning towards a library/NuGet package/ffmpeg script.

    Does a good NuGet package/library exist that can do this, or should I go for the alternative option?

    Edit 1

    I have tried just pasting in the metadata into the correct place in the file, just in case it might work, but it didn’t.

    Edit 2

    This is the metadata injected by Google’s Spatial Media Tool, which is what I am trying to achieve:

     <?xml version="1.0"?><rdf:SphericalVideo
     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/"><GSpherical:Spherical>true</GSpherical:Spherical><GSpherical:Stitched>true</GSpherical:Stitched><GSpherical:StitchingSoftware>Spherical Metadata Tool</GSpherical:StitchingSoftware><GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType></rdf:SphericalVideo>

    Edit 3

    I’ve also tried to do it with ffmpeg, like so: ffmpeg -i input.mp4 -movflags use_metadata_tags -metadata Spherical=true -metadata Stitched=true -metadata ProjectionType=equirectangular -metadata StitchingSoftware=StreetviewJourney -codec copy output.mp4

    I think the issue with the ffmpeg method is that it does not produce the rdf:SphericalVideo wrapper, which is what allows the spherical video tags to be used.

    Edit 4

    When I extract the metadata using ffmpeg, the spherical tag shows up in the logs but not in the ffmetadata file I write out. This was the command I used: ffmpeg -i injected.mp4 -map_metadata -1 -f ffmetadata data.txt

    This is the output of the log:

    fps, 60 tbr, 15360 tbn, 120 tbc (default)
       Metadata:
         handler_name    : VideoHandler
       Side data:
         spherical: equirectangular (0.000000/0.000000/0.000000)

    Edit 5

    I also tried to get the metadata using this command: ffprobe -v error -select_streams v:0 -show_streams -of default=noprint_wrappers=1 injected.mp4

    This is the log it output:

    TAG:handler_name=VideoHandler
    side_data_type=Spherical Mapping
    projection=equirectangular
    yaw=0
    pitch=0
    roll=0

    I then tried to use this command, but it didn’t work: ffmpeg -i chapmanspeak.mp4 -movflags use_metadata_tags -metadata side_metadata_type="Spherical Mapping" -metadata projection=equirectangular -metadata yaw=0 -metadata pitch=0 -metadata roll=0 -codec copy output.mp4

    Edit 6

    I tried @VC.One’s method, but I must be doing something wrong because the output file is unplayable. Here is my code:

           public static void Metadata(string inputFile, string outputFile)
           {
               byte[] metadata = HexStringToByteArray("3C 3F 78 6D 6C 20 76 65 72 73 69 6F 6E 3D 22 31 2E 30 22 3F 3E 3C 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 0A 78 6D 6C 6E 73 3A 72 64 66 3D 22 68 74 74 70 3A 2F 2F 77 77 77 2E 77 33 2E 6F 72 67 2F 31 39 39 39 2F 30 32 2F 32 32 2D 72 64 66 2D 73 79 6E 74 61 78 2D 6E 73 23 22 0A 78 6D 6C 6E 73 3A 47 53 70 68 65 72 69 63 61 6C 3D 22 68 74 74 70 3A 2F 2F 6E 73 2E 67 6F 6F 67 6C 65 2E 63 6F 6D 2F 76 69 64 65 6F 73 2F 31 2E 30 2F 73 70 68 65 72 69 63 61 6C 2F 22 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 70 68 65 72 69 63 61 6C 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 74 72 75 65 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 65 64 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 53 70 68 65 72 69 63 61 6C 20 4D 65 74 61 64 61 74 61 20 54 6F 6F 6C 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 53 74 69 74 63 68 69 6E 67 53 6F 66 74 77 61 72 65 3E 3C 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 65 71 75 69 72 65 63 74 61 6E 67 75 6C 61 72 3C 2F 47 53 70 68 65 72 69 63 61 6C 3A 50 72 6F 6A 65 63 74 69 6F 6E 54 79 70 65 3E 3C 2F 72 64 66 3A 53 70 68 65 72 69 63 61 6C 56 69 64 65 6F 3E");
               byte[] stco = HexStringToByteArray("73 74 63 6F");
               byte[] moov = HexStringToByteArray("6D 6F 6F 76");
               byte[] trak = HexStringToByteArray("74 72 61 6B");

               byte[] file = File.ReadAllBytes(inputFile);

               //find trak
               int trakPosition = 0;
                for (int a = 0; a < file.Length - trak.Length; a++)
                {
                    for (int b = 0; b < trak.Length; b++)
                   {
                       if (file[a + b] != trak[b])
                           break;
                       if (b == trak.Length - 1)
                           trakPosition = a;
                   }
               }
               if (trakPosition == 0)
                   throw new FileLoadException();

               //add metadata
               int trakLength = BitConverter.ToInt32(new ArraySegment<byte>(file, trakPosition - 4, 4).Reverse().ToArray(), 0);
               var fileList = file.ToList();
               fileList.InsertRange(trakPosition - 4 + trakLength, metadata);
               file = fileList.ToArray();

               ////change length - tried this as well
               //byte[] trakBytes = BitConverter.GetBytes(trakLength + metadata.Length).Reverse().ToArray();
            //for (int i = 0; i < 4; i++)
               //    file[trakPosition - 4 + i] = trakBytes[i];

               //find moov
               int moovPosition = 0;
            for (int a = 0; a < file.Length - moov.Length; a++)
            {
                for (int b = 0; b < moov.Length; b++)
                   {
                       if (file[a + b] != moov[b])
                           break;
                       if (b == moov.Length - 1)
                           moovPosition = a;
                   }
               }
               if (moovPosition == 0)
                   throw new FileLoadException();

               //change length
               int moovLength = BitConverter.ToInt32(new ArraySegment<byte>(file, moovPosition - 4, 4).Reverse().ToArray(), 0);
               byte[] moovBytes = BitConverter.GetBytes(moovLength + metadata.Length).Reverse().ToArray();
            for (int i = 0; i < 4; i++)
                   file[moovPosition - 4 + i] = moovBytes[i];

               //find stco
               int stcoPosition = 0;
            for (int a = 0; a < file.Length - stco.Length; a++)
            {
                for (int b = 0; b < stco.Length; b++)
                   {
                       if (file[a + b] != stco[b])
                           break;
                       if (b == stco.Length - 1)
                           stcoPosition = a;
                   }
               }
               if (stcoPosition == 0)
                   throw new FileLoadException();

               //modify entries
               int stcoEntries = BitConverter.ToInt32(new ArraySegment<byte>(file, stcoPosition + 8, 4).Reverse().ToArray(), 0);
            for (int a = stcoPosition + 12; a < stcoPosition + 12 + stcoEntries * 4; a += 4)
               {
                   int entryLength = BitConverter.ToInt32(new ArraySegment<byte>(file, a, 4).Reverse().ToArray(), 0);
                   byte[] newEntry = BitConverter.GetBytes(entryLength + metadata.Length).Reverse().ToArray();
                for (int b = 0; b < 4; b++)
                       file[a + b] = newEntry[b];
               }

               File.WriteAllBytes(outputFile, file);
           }

           private static byte[] HexStringToByteArray(string hex)
           {
               hex = hex.Replace(" ", "");
               return Enumerable.Range(0, hex.Length)
                                .Where(x => x % 2 == 0)
                                .Select(x => Convert.ToByte(hex.Substring(x, 2), 16))
                                .ToArray();
           }

    The bytes are reversed because .mp4 box fields are big-endian, while BitConverter works in the machine’s native little-endian order. I tried to also update the length of trak, but that didn’t work either.
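
    For reference, a hedged sketch of the container detail that pasting the bare XML misses (written in Java for illustration; not from the original thread): per Google’s Spherical Video V1 (Spatial Media) spec, the RDF/XML has to sit inside a uuid box placed inside the video trak: a 4-byte big-endian size, the literal type uuid, the spec’s 16-byte UUID ffcc8263-f855-4a93-8814-587a02521fdd, then the XML payload. The size fields of trak, moov, and every other enclosing box must then grow by the box’s total length.

     import java.io.ByteArrayOutputStream;
     import java.nio.ByteBuffer;
     import java.nio.charset.StandardCharsets;

     public class SphericalBox {
         // 16-byte UUID from the Spherical Video V1 spec:
         // ffcc8263-f855-4a93-8814-587a02521fdd
         private static final byte[] UUID_BYTES = {
             (byte) 0xff, (byte) 0xcc, (byte) 0x82, 0x63,
             (byte) 0xf8, 0x55, 0x4a, (byte) 0x93,
             (byte) 0x88, 0x14, 0x58, 0x7a,
             0x02, 0x52, 0x1f, (byte) 0xdd
         };

         /** Builds the complete uuid box: size + 'uuid' + UUID + RDF/XML payload. */
         public static byte[] build(String rdfXml) {
             byte[] payload = rdfXml.getBytes(StandardCharsets.UTF_8);
             int size = 4 + 4 + UUID_BYTES.length + payload.length;
             ByteArrayOutputStream out = new ByteArrayOutputStream(size);
             out.write(ByteBuffer.allocate(4).putInt(size).array(), 0, 4); // big-endian box size
             byte[] type = "uuid".getBytes(StandardCharsets.US_ASCII);
             out.write(type, 0, type.length);                              // box type
             out.write(UUID_BYTES, 0, UUID_BYTES.length);                  // spec UUID
             out.write(payload, 0, payload.length);                        // RDF/XML metadata
             return out.toByteArray();
         }
     }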