
Media (91)

Other articles (46)

  • MediaSPIP: Changing object creation and final publication rights

    11 November 2010

    By default, MediaSPIP allows 5 types of objects to be created.
    Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, but they are of course configurable by the webmasters.
    These rights are locked for several reasons: because allowing users to publish should be a deliberate decision of the webmaster, not of the platform as a whole, and therefore should not be enabled by default; and because having an account can also serve other purposes, (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player was created specifically for MediaSPIP and can easily be adapted to fit a specific theme.
    For older browsers, the Flowplayer Flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used as a fallback.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

On other sites (8683)

  • How to use ffmpeg to copy streams and preserve format and metadata?

    24 September 2023, by dzeek

    I have two files with multiple streams in each.

    Output.mp4 has 2 streams: video and audio (streams 0, 1).

    Input.mp4 has 3 streams: video, audio and data (streams 0, 1, 2).

    I need to add stream 2 from Input.mp4 to Output.mp4, yielding Final.mov.

    I tried using this command, but it does not seem to preserve the format and metadata of the stream that is added:

    ffmpeg -y -i Output.mp4 -i Input.mp4 -c copy -map 0:0 -map 0:1 -map 1:2 Final.mov

    The output from ffmpeg seems to show that it is working, but the format and metadata of the streams in Final.mov are not correct.

    I would appreciate any help with changing the command to make it work.

    Thank you!

    Here is the command output:

    ffmpeg -y -i Output.mp4 -i Input.mp4 -c copy -map 0:0 -map 0:1 -map 1:2 Final.mov

ffmpeg version N-84679-gd65b595 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 6.3.0 (GCC)
  configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
  libavutil      55. 51.100 / 55. 51.100
  libavcodec     57. 86.103 / 57. 86.103
  libavformat    57. 67.100 / 57. 67.100
  libavdevice    57.  3.101 / 57.  3.101
  libavfilter     6. 78.100 /  6. 78.100
  libswscale      4.  3.101 /  4.  3.101
  libswresample   2.  4.100 /  2.  4.100
  libpostproc    54.  2.100 / 54.  2.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Output.mp4':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    creation_time   : 2018-04-15T21:57:13.000000Z
  Duration: 00:24:25.05, start: 0.000000, bitrate: 3571 kb/s
    Stream #0:0(eng): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 7680x3840 [SAR 1:1 DAR 2:1], 3252 kb/s, 1 fps, 1 tbr, 100k tbn, 2.15 tbc (default)
    Metadata:
      creation_time   : 2018-04-15T21:57:13.000000Z
      handler_name    : Alias Data Handler
      encoder         : HEVC Coding
    Side data:
      stereo3d: 2D
      spherical: equirectangular (0.000000/0.000000/0.000000)
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
    Metadata:
      creation_time   : 2018-04-15T21:57:13.000000Z
      handler_name    : Alias Data Handler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'Input.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2mp41
    creation_time   : 2018-04-12T12:02:32.000000Z
    make            : Insta360
    model           : Insta360 Pro
    encoder         : Lavf57.71.100
    description     : {"info":{"gyro_stabilized":false,"initial_view_changed":false}}
  Duration: 00:24:25.00, start: -0.004233, bitrate: 6394 kb/s
    Stream #1:0(und): Video: hevc (Main) (hev1 / 0x31766568), yuv420p(tv), 7680x3840, 6196 kb/s, 1 fps, 1 tbr, 360k tbn, 5 tbc (default)
    Metadata:
      creation_time   : 2018-04-12T12:02:32.000000Z
      handler_name    : VideoHandler
    Side data:
      stereo3d: 2D
      spherical: equirectangular (0.000000/0.000000/0.000000)
    Stream #1:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, 4.0, fltp, 129 kb/s (default)
    Metadata:
      creation_time   : 2018-04-12T12:02:32.000000Z
      handler_name    : SoundHandler
    Stream #1:2(und): Data: none (camm / 0x6D6D6163), 36 kb/s
    Metadata:
      creation_time   : 2018-04-12T12:02:32.000000Z
      handler_name    : CameraMetadataMotionHandler
Output #0, mov, to 'Final.mov':
  Metadata:
    major_brand     : mp42
    minor_version   : 0
    compatible_brands: mp42mp41
    encoder         : Lavf57.67.100
    Stream #0:0(eng): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 7680x3840 [SAR 1:1 DAR 2:1], q=2-31, 3252 kb/s, 1 fps, 1 tbr, 100k tbn, 100k tbc (default)
    Metadata:
      creation_time   : 2018-04-15T21:57:13.000000Z
      handler_name    : Alias Data Handler
      encoder         : HEVC Coding
    Side data:
      stereo3d: 2D
      spherical: equirectangular (0.000000/0.000000/0.000000)
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
    Metadata:
      creation_time   : 2018-04-15T21:57:13.000000Z
      handler_name    : Alias Data Handler
    Stream #0:2(und): Data: none (camm / 0x6D6D6163), 36 kb/s
    Metadata:
      creation_time   : 2018-04-12T12:02:32.000000Z
      handler_name    : CameraMetadataMotionHandler
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
  Stream #1:2 -> #0:2 (copy)
Press [q] to stop, [?] for help
[mov @ 0000000004894f00] Unknown hldr_type for camm / 0x6D6D6163, writing dummy values
frame= 1465 fps=625 q=-1.0 Lsize=  650825kB time=00:24:24.98 bitrate=3639.3kbits/s speed= 625x
video:581593kB audio:56757kB subtitle:0kB other streams:6602kB global headers:0kB muxing overhead: 0.910633%

    Here are the streams in the Final.mov file:

    ffmpeg -i Final.mov

ffmpeg version N-84679-gd65b595 Copyright (c) 2000-2017 the FFmpeg developers
  built with gcc 6.3.0 (GCC)
  configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
  libavutil      55. 51.100 / 55. 51.100
  libavcodec     57. 86.103 / 57. 86.103
  libavformat    57. 67.100 / 57. 67.100
  libavdevice    57.  3.101 / 57.  3.101
  libavfilter     6. 78.100 /  6. 78.100
  libswscale      4.  3.101 /  4.  3.101
  libswresample   2.  4.100 /  2.  4.100
  libpostproc    54.  2.100 / 54.  2.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0000000002726860] overread end of atom 'stsd' by 2974416 bytes
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Final.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 512
    compatible_brands: qt
    encoder         : Lavf57.67.100
  Duration: 00:24:25.00, start: 0.000000, bitrate: 3639 kb/s
    Stream #0:0(eng): Video: hevc (Main) (hvc1 / 0x31637668), yuvj420p(pc, bt709), 7680x3840 [SAR 1:1 DAR 2:1], 3252 kb/s, 1 fps, 1 tbr, 100k tbn, 2.15 tbc (default)
    Metadata:
      handler_name    : DataHandler
      encoder         : HEVC Coding
    Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 317 kb/s (default)
    Metadata:
      handler_name    : DataHandler
    Stream #0:2(eng): Data: none (stts / 0x73747473), 36 kb/s
    Metadata:
      handler_name    : DataHandler
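
    One adjustment worth trying (an untested sketch, not verified against these exact files): since Input.mp4 itself carries the camm track in an MP4 container, write the result as MP4 rather than MOV, and pull the global metadata (make, model, spherical description) from the second input with -map_metadata 1; -movflags use_metadata_tags asks the muxer to keep non-standard metadata keys. These are standard ffmpeg options, but the output name Final.mp4 is only for illustration:

    ffmpeg -y -i Output.mp4 -i Input.mp4 -map 0:0 -map 0:1 -map 1:2 \
           -c copy -map_metadata 1 -movflags use_metadata_tags Final.mp4

    The "Unknown hldr_type for camm ... writing dummy values" warning above is the muxer saying it does not know how to label that data track in a .mov, which matches the stream later showing up as "Data: none (stts / ...)", so keeping the container that already held the camm track is the least risky first step.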


    


  • How can I mux H.264 RTP output into a container using FFMPEG?

    23 September 2013, by Grad

    I am working on the effects of network losses in video transmission. To simulate the network losses, I use a simple program that drops random RTP packets from the output of the H.264 RTP encoding.

    I use Joint Model (JM) 14.2 to encode the video. However, I don't use the Annex B format as my output; instead I choose RTP packets as the output. The JM output is generated as a sequence of RTP packets, each with an RTP header and payload. After that, some of the RTP packets are dropped by a simple program. Then I decode the output using JM and its error concealment methods, which gives me a YUV file as output. The format of the packetized bitstream is as follows:

        ----------------------------------------------------------------------
        | RTP Header #1 | RTP Payload #1 | RTP Header #2 | RTP Payload #2 |...
        ----------------------------------------------------------------------

    I want to run a subjective test with these bitstreams, and it is very inconvenient to crowdsource such a test with GBs of video data. So I want to mux these bitstreams into a container (e.g. AVI) using FFMPEG. I have tried to decode these bitstreams with FFMPEG and FFPLAY; however, neither of them worked. I also tried the following command, and it didn't work either:

       ffmpeg -f h264 -i  -vcodec copy -r 25 out.avi

    Which format or muxer should I use? Do I need to convert these files to any other format?
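
    For reference, ffmpeg's raw "h264" demuxer expects an Annex B byte stream (NAL units separated by start codes), not a file that still contains RTP headers, which is likely why neither ffmpeg nor ffplay could read the lossy bitstreams directly. A sketch of the muxing step, assuming the stream has first been depacketized back to Annex B (the input file name is a placeholder):

    ffmpeg -f h264 -framerate 25 -i recovered_annexb.264 -c copy out.avi

    The depacketizing itself (stripping the 12-byte RTP headers, reassembling any fragmented NAL units, and prepending 0x00000001 start codes) has to happen before this command, e.g. with a small script that walks the packet sequence shown above.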

  • ffmpegthumbnailer error with carrierwave-video-thumbnailer

    30 January 2014, by scientiffic

    I am getting the error "No such file or directory" when I try to run ffmpegthumbnailer using the carrierwave-video-thumbnailer gem.

    I confirmed that ffmpegthumbnailer is working correctly on my computer since I can generate a thumbnail image from a video straight from the command line.

    From my logs, it looks like my app thinks that it has generated a thumbnail image. However, when I look in the directory, there is no file tmpfile.png, and my app fails with the error.

    Has anyone successfully used the carrierwave-video-thumbnailer gem to create thumbnails, and if so, what am I doing wrong? Alternatively, if there is some way I can just run ffmpegthumbnailer within my model, I could do that too.

    Here are my logs:

    Running....ffmpegthumbnailer -i /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/thumb_Untitled.mov -o /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/tmpfile.png -c png -q 10 -s 192 -f
    Success!
    Errno::ENOENT: No such file or directory - (/Users/.../Website/public/uploads/tmp/1380315873-21590-2814/tmpfile.png, /Users/.../Website/public/uploads/tmp/1380315873-21590-2814/thumb_Untitled.mov)

    video_path_uploader.rb

    class VideoPathUploader < CarrierWave::Uploader::Base
      include CarrierWave::Video
      include CarrierWave::Video::Thumbnailer

      process encode_video: [:mp4]

      # Include RMagick or MiniMagick support:
      # include CarrierWave::RMagick
      include CarrierWave::MiniMagick

      # Choose what kind of storage to use for this uploader:
      # storage :file
      storage :fog

      # Override the directory where uploaded files will be stored.
      # This is a sensible default for uploaders that are meant to be mounted:
      def store_dir
        "#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
      end

      version :thumb do
        process thumbnail: [{format: 'png', quality: 10, size: 192, strip: true, logger: Rails.logger}]

        def full_filename(for_file)
          png_name for_file, version_name
        end
      end

      def png_name(for_file, version_name)
        %Q{#{version_name}_#{for_file.chomp(File.extname(for_file))}.png}
      end
    end

    Video.rb

    class Video < ActiveRecord::Base
      # maybe we should add a title attribute to the video?
      attr_accessible :position, :project_id, :step_id, :image_id, :saved, :embed_url, :thumbnail_url, :video_path
      mount_uploader :video_path, VideoPathUploader
      ...
    end
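
    As a rough sketch of the "run ffmpegthumbnailer within my model" fallback mentioned above (an illustration only, not the gem's API: the after_save hook, the local-path assumption, and the error handling are choices made here for the example, and with storage :fog the file may not be on local disk when this runs):

    require 'shellwords'

    class Video < ActiveRecord::Base
      mount_uploader :video_path, VideoPathUploader
      after_save :generate_thumbnail

      private

      # Shell out to ffmpegthumbnailer directly, writing the PNG next to the
      # source video, much like the command that already works in the terminal.
      def generate_thumbnail
        source = video_path.path
        return if source.nil? || !File.exist?(source)

        target = source.chomp(File.extname(source)) + '.png'
        cmd = "ffmpegthumbnailer -i #{Shellwords.escape(source)} " \
              "-o #{Shellwords.escape(target)} -c png -q 10 -s 192"
        Rails.logger.error("ffmpegthumbnailer failed: #{cmd}") unless system(cmd)
      end
    end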