
Other articles (27)

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and sound both on conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)
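    A minimal sketch of the HTML5-first, Flash-fallback pattern described above; the element id, the file path and the Flowplayer call are illustrative assumptions, not MediaSPIP’s actual code:

    // Prefer the native HTML5 <video> element, fall back to the Flash player
    // (Flowplayer) when the browser cannot play the file natively.
    var container = document.getElementById('player');   // assumed container element
    var video = document.createElement('video');
    var nativeOk = typeof video.canPlayType === 'function' &&
                   video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"') !== '';

    if (nativeOk) {
      video.src = 'media/document.mp4';                   // assumed file name
      video.controls = true;
      container.appendChild(video);
    } else {
      // Flowplayer 3.x (Flash) embed; the swf path is an assumption.
      flowplayer('player', '/flowplayer/flowplayer.swf', {
        clip: { url: 'media/document.mp4' }
      });
    }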

  • From upload to the final video [standalone version]

    31 January 2010

    The path an audio or video document takes in SPIPMotion is divided into three distinct steps.
    Upload and retrieval of information about the source video
    First, you need to create a SPIP article and attach the "source" video document to it.
    When this document is attached to the article, two actions are carried out in addition to the normal behaviour: retrieval of the technical information about the file's audio and video streams; generation of a thumbnail: extraction of a (...)
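    The two extra actions mentioned above, reading the technical information of the audio and video streams and extracting a thumbnail, are the kind of steps typically done with ffprobe and ffmpeg; a hedged sketch with assumed file names, not SPIPMotion’s actual commands:

    # Dump stream and container information as JSON (codecs, duration, resolution, etc.)
    ffprobe -v error -show_format -show_streams -of json source.mp4
    # Extract a single frame at the 5-second mark as a thumbnail
    ffmpeg -y -i source.mp4 -ss 5 -frames:v 1 thumbnail.jpg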

On other sites (2048)

  • Any equivalent to '-c:v copy' for changing the container of webm to mp4 video client-side in JS?

    15 March 2020, by programmingisphun

    I’m exploring the MediaRecorder API and really hope to get an mp4 output instead of webm. I’d prefer to stay client-side to reduce server resources, and ffmpeg.js at 17 MB doesn’t seem viable for online use.

    Discovered that one can create (in Chrome/Firefox) a webm/h.264 recording, which converts without re-encoding to an mp4 using FFmpeg. The resulting file opens fine in QuickTime on macOS (snippet source):
    ffmpeg -i inputfile.webm -c:v copy outputfile.mp4

    Whereas this clever hack works to display the video in the browser, supposedly as an mp4, but the downloaded file won’t open in QuickTime (only in VLC, same as for the webm):

    const superBuffer = new Blob(recordedBlobs, {type: 'video/mp4'});
    video.src = window.URL.createObjectURL(superBuffer);

    Therefore, wondering if there’s another JS hack to do what FFmpeg’s -c:v copy does, but on the blob bytes of the MediaRecorder’s recordedBlobs, giving them the same mp4 container that’s accepted by QuickTime etc.? (A sketch of the recording setup is included after this post.)

    Pastebin of current setup
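    The post assumes a webm/h.264 recording exists in the first place. Below is a minimal sketch of that recording side, with getUserMedia as the source; the mime-type string, chunk interval and file name are assumptions, not taken from the pastebin:

    const recordedBlobs = [];

    async function record() {
      // Any MediaStream works; a camera capture is assumed for the example.
      const stream = await navigator.mediaDevices.getUserMedia({ video: true });

      // Ask for H.264 inside WebM so the result can be remuxed to mp4 with
      // "-c:v copy" and no re-encode; fall back to the browser default otherwise.
      const mimeType = 'video/webm;codecs=h264';
      const options = MediaRecorder.isTypeSupported(mimeType) ? { mimeType } : undefined;

      const recorder = new MediaRecorder(stream, options);
      recorder.ondataavailable = (e) => { if (e.data.size > 0) recordedBlobs.push(e.data); };
      recorder.onstop = () => {
        // Keep the honest container type: relabelling the blob as video/mp4
        // does not rewrite the bytes, which is why QuickTime rejects it.
        const blob = new Blob(recordedBlobs, { type: 'video/webm' });
        const a = document.createElement('a');
        a.href = URL.createObjectURL(blob);
        a.download = 'recording.webm';
        a.click();
      };

      recorder.start(1000);                     // emit a chunk every second
      setTimeout(() => recorder.stop(), 5000);  // stop after 5 s for the example
    }

    The resulting WebM file is the kind of input the ffmpeg -c:v copy command quoted above can remux without re-encoding.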

  • ffmpeg - How to change the filter parameters depending on time or frame number?

    2 March 2020, by LookAndSee

    Hello to all users and helpers here in this forum! Thank you, I am new and I have already found a lot of solutions here.

    Now I want to ask if someone can help me:

    I have a movie of about 30 seconds made from a single image.
    Now I want to pixelate it depending on time or frame number, a little bit less each time.
    My code so far:

    ffmpeg -i in.mp4 -vf scale=iw/n:ih/n,scale=n*iw:n*ih:flags=neighbor out.mp4

    where n should be the frame number, 1 to 900; this could also be t+1 for a slower change. (By n*iw : n*ih in the second scale I mean n times iw and n times ih.)

    Error message:

    Undefined constant or missing '(' in 'n'
    Error when evaluating the expression 'ih/n'
    Maybe the expression for out_w: 'iw/n' or for out_h: 'ih/n' is self-referencing.
    Failed to configure output pad on Parsed_scale_0
    Error reinitializing filters!
    Failed to inject frame into filter network: Invalid argument
    Error while processing the decoded data for stream #0:0

    Do you have a suggestion, please? Thank you in advance. (A shell-loop workaround is sketched below.)
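    The error shows that n is simply not defined in the scale width/height expressions of this FFmpeg build. One workaround, purely my own sketch and not from the post, is to render the frames one at a time in a shell loop with a decreasing pixelation factor and assemble them afterwards; the image name, the 1280x720 output size and the factor mapping are assumptions:

    #!/bin/bash
    # 900 frames at 30 fps = 30 s; the factor falls from 30 (very blocky) to 1 (sharp).
    for n in $(seq 1 900); do
      f=$(( 31 - (n + 29) / 30 ))
      ffmpeg -loglevel error -y -i image.png \
             -vf "scale=iw/${f}:ih/${f},scale=1280:720:flags=neighbor" \
             "frame_$(printf '%04d' "$n").png"
    done
    ffmpeg -y -framerate 30 -start_number 1 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p out.mp4

    Scaling back up to a fixed size keeps every frame the same dimensions, which the final encoding step requires.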

  • Fragmented mp4 broadcast with avformat

    6 March 2020, by Daniel

    I want to broadcast a live stream (h264 -> fragmented mp4) with avformat functions.

    Now basically I have a working example with this pseudo code:

    avformat_write_header
    while (!end) {
     av_read_frame
     av_write_frame
    }
    av_write_trailer

    I have these output flags: movflags +frag_keyframe+empty_moov+default_base_moof.

    I’m dispatching the output to an HTML5 video element, which plays fine and smoothly.
    However, I want to broadcast it, so whenever a new player comes into the picture I want to stream the very same data to it:

    avformat_write_header
    while (!end) {
     av_read_frame
     av_write_frame (to multiple clients)
    }
    av_write_trailer

    However, the first client can play the stream but the rest of them cannot.

    This is because of the fMP4 structure: the first client receives the correct ftyp and moov atoms, whereas the rest of the clients do not.

    Now I can hack my code to provide the earlier ftyp and moov atoms to the later clients, but I think that is not wise - correct me if I’m wrong.

    My questions:

    1. How can I ask avformat during the stream to calculate and include the ftyp and moov atoms?
    2. Will including ftyp and moov during the stream ruin the first client? (Is it valid in fMP4 to have ftyp and moov more than once, not just at the beginning?)

    Thank you
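    For reference, a common pattern with +frag_keyframe+empty_moov, and purely my own sketch rather than something from the post, is to capture the bytes produced by avformat_write_header() in memory and replay that init segment (ftyp + moov) to every client that joins later, before forwarding the live moof/mdat fragments. A hedged sketch using a dynamic AVIO buffer, assuming the stream setup from the pseudo code above already exists:

    #include <libavformat/avformat.h>
    #include <libavutil/dict.h>

    static uint8_t *init_segment = NULL;   /* cached ftyp + moov */
    static int      init_segment_size = 0;

    static int capture_init_segment(AVFormatContext *oc)
    {
        AVDictionary *opts = NULL;
        int ret;

        /* Route the muxer output into an in-memory buffer instead of a socket. */
        if ((ret = avio_open_dyn_buf(&oc->pb)) < 0)
            return ret;

        av_dict_set(&opts, "movflags",
                    "frag_keyframe+empty_moov+default_base_moof", 0);
        ret = avformat_write_header(oc, &opts);
        av_dict_free(&opts);
        if (ret < 0)
            return ret;

        /* Everything written so far is the init segment.  Depending on the
         * FFmpeg version the moov may only be flushed with the first fragment,
         * so check the size.  Free the buffer later with av_free(). */
        init_segment_size = avio_close_dyn_buf(oc->pb, &init_segment);

        /* Open a fresh buffer for the av_read_frame / av_write_frame loop;
         * each flushed fragment goes to every connected client, and late
         * joiners receive init_segment first. */
        return avio_open_dyn_buf(&oc->pb);
    }

    Keeping ftyp/moov out of the shared fragment stream and only sending the cached copy to newcomers sidesteps the question of whether a repeated moov would confuse the first client, which in practice depends on the player.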