Advanced search

Media (0)


No media matching your criteria is available on the site.

Other articles (53)

  • Sites built with MediaSPIP

    2 May 2011

    This page presents some of the sites running MediaSPIP.
    You can of course add your own via the form at the bottom of the page.

  • HTML5 audio and video support

    10 April 2011

    MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
    For older browsers, the Flowplayer Flash player is used.
    The HTML5 player used was created specifically for MediaSPIP: its appearance is fully customisable to match a chosen theme.
    These technologies make it possible to deliver video and audio both to conventional computers (...)

  • HTML5 audio and video support

    13 April 2011

    MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
    The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
    For older browsers the Flowplayer flash fallback is used.
    MediaSPIP allows for media playback on major mobile platforms with the above (...)

On other sites (7686)

  • Seek in fragmented MP4

    15 November 2020, by Stefan Falk

    For my web client I want the user to be able to play a track right away, without having to download the entire file. For this I am using a fragmented MP4 with the AAC audio codec (MIME type: audio/mp4; codecs="mp4a.40.2").

    This is the command being used to convert an input file to an fMP4:

    ffmpeg -i /tmp/input.any \
  -f mp4 \
  -movflags faststart+separate_moof+empty_moov+default_base_moof \
  -acodec aac -b:a 256000 \
  -frag_duration 500K \
   /tmp/output.mp4
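
    For reference, I can also check the resulting layout programmatically with a quick sketch like the one below (my own illustration, not part of the conversion pipeline): it walks the top-level boxes by reading each 32-bit size and 4-character type, and it assumes local access to the file (64-bit largesize boxes are not handled):

    import struct

    def list_top_level_boxes(path):
        """Print the offset, type and size of every top-level MP4 box."""
        with open(path, "rb") as f:
            offset = 0
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                size, box_type = struct.unpack(">I4s", header)
                print("%10d  %s  %d bytes" % (offset, box_type.decode("ascii"), size))
                if size < 8:  # size 0 (box runs to EOF) or 1 (64-bit largesize) not handled in this sketch
                    break
                f.seek(offset + size)
                offset += size

    list_top_level_boxes("/tmp/output.mp4")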

    If I look at this file in MP4Box.js, I see that it is fragmented like this:

    ftyp
moov
moof
mdat
moof
mdat
..
moof
mdat
mfra

    This looks alright so far, but the problem I am facing now is that it's not apparent to me how to start loading data from a specific timestamp without introducing additional overhead. What I mean is that I need the exact byte offset of the first [moof][mdat] pair for a specific timestamp, without the entire file being available.

    Let's say I have a file that looks like this:

    ftyp
moov
moof # 00:00
mdat 
moof # 00:01
mdat
moof # 00:02
mdat
moof # 00:03
mdat
mfra

    This file, however, is not available on my server directly; it is being loaded from another service, and the client wants to request packets starting at 00:02.

    Is there a way to do this efficiently without me having to load the entire file from the other service to my server?

    My guess would be to load [ftyp][moov] (or at least store this part on my own server), but as far as I know, the metadata stored in those boxes won't help me find the byte offset of the first [moof][mdat] pair.

    Is this even possible, or am I following the wrong approach here?
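
    One idea I am considering, purely as an untested assumption on my side: if the file is remuxed with ffmpeg’s -movflags +global_sidx, the muxer should write a single sidx (segment index) box near the start of the file, listing the duration and byte size of each [moof][mdat] subsegment. Storing just [ftyp][moov][sidx] on my server would then be enough to map a timestamp to a byte offset and fetch the rest with an HTTP Range request. A rough sketch of parsing such an index (big-endian fields as defined in ISO/IEC 14496-12; the function names are made up for illustration):

    import struct

    def parse_sidx(buf, sidx_offset):
        """Parse a 'sidx' box at sidx_offset in buf and return a list of
        (start_time_in_seconds, absolute_byte_offset, byte_size) per subsegment."""
        size, box_type = struct.unpack_from(">I4s", buf, sidx_offset)
        assert box_type == b"sidx"
        pos = sidx_offset + 8
        version = buf[pos]
        pos += 4  # 1 byte version + 3 bytes flags
        reference_id, timescale = struct.unpack_from(">II", buf, pos)
        pos += 8
        if version == 0:
            earliest_pts, first_offset = struct.unpack_from(">II", buf, pos)
            pos += 8
        else:
            earliest_pts, first_offset = struct.unpack_from(">QQ", buf, pos)
            pos += 16
        pos += 2  # reserved
        (reference_count,) = struct.unpack_from(">H", buf, pos)
        pos += 2

        # Referenced subsegments start right after the sidx box itself, plus first_offset.
        byte_cursor = sidx_offset + size + first_offset
        time_cursor = earliest_pts
        entries = []
        for _ in range(reference_count):
            ref, duration, _sap = struct.unpack_from(">III", buf, pos)
            pos += 12
            referenced_size = ref & 0x7FFFFFFF  # top bit is reference_type
            entries.append((time_cursor / timescale, byte_cursor, referenced_size))
            byte_cursor += referenced_size
            time_cursor += duration
        return entries

    def byte_range_for(entries, seconds):
        """Return (offset, size) of the subsegment that contains the given timestamp."""
        chosen = entries[0]
        for start, offset, length in entries:
            if start > seconds:
                break
            chosen = (start, offset, length)
        return chosen[1], chosen[2]

    With the returned (offset, size) pair, the request to the remote service would then be a plain Range: bytes=offset-(offset+size-1) header.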

  • Building ffmpeg iOS libraries for armv7, armv7s, arm64, i386 and universal

    4 February 2015, by sandy

    I have seen several scripts to build the FFmpeg API for armv7, armv7s and i386, but couldn’t find anything that would also work for arm64. Some answers on other threads of this forum suggested preparing a separate library for arm64, but that does not work well with the rest of the architectures. Hence I need a script that works for all the supported iOS architectures, including armv7, armv7s, arm64 and i386.

  • ffmpeg convert variable framerate .webm to constant framerate video

    4 November 2019, by Dashadower

    I have a .webm file of a recording of a game at 16 fps. However, upon trying to process the video with OpenCV, it seems the video was recorded with a variable framerate, so when I try to use OpenCV to get a frame every second by grabbing every 16th frame, it won’t work, since the video stream ends prematurely.

    Therefore, I’m trying to convert a variable-framerate .webm video, which claims it has a framerate of 16 fps, into a constant-framerate video, so I can extract one frame for every second. I’ve tried the following ffmpeg command (from https://ffmpeg.zeranoe.com/forum/viewtopic.php?t=5518):

    ffmpeg -i input.webm -c:v copy -b:v copy -r 16 output.webm

    However, the following error occurs:

    [NULL @ 00000272ccbc0c40] [Eval @ 000000bc11bfe2f0] Undefined constant or missing '(' in 'copy'
    [NULL @ 00000272ccbc0c40] Unable to parse option value "copy"
    [NULL @ 00000272ccbc0c40] Error setting option b to value copy.

    Error setting up codec context options.

    Here is the code I’m trying to use to process a frame every second:

    import cv2

    # test_mp4_vod_path is defined elsewhere and points to the recorded video file.
    video = cv2.VideoCapture(test_mp4_vod_path)
    print("Opened ", test_mp4_vod_path)
    print("Processing MP4 frame by frame")

    # Seek to the frame you want to start reading from:
    # set this manually to fps * (start time in seconds).
    video.set(cv2.CAP_PROP_POS_FRAMES, 0)
    success, frame = video.read()
    # fps = int(video.get(cv2.CAP_PROP_FPS))  # this will return 0!
    fps = 16  # hardcode fps
    total_frame_count = int(video.get(cv2.CAP_PROP_FRAME_COUNT))
    print("Loading video %d seconds long with FPS %d and total frame count %d " % (total_frame_count / fps, fps, total_frame_count))

    count = 1
    while video.isOpened():
        success, frame = video.read()
        if not success:
            break

        if count % fps == 0:
            print("%dth frame is %d seconds on video" % (count, count / fps))
        count += 1

    The code finishes before it gets near the end of the video, since the video isn’t at a constant FPS.
    How can I convert a variable-FPS video to a constant-FPS video?
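
    For completeness, a workaround I have been considering but have not verified against this particular .webm: instead of counting frames, seek by timestamp with OpenCV’s CAP_PROP_POS_MSEC property, which avoids relying on a constant framerate. A minimal sketch (path and duration are placeholders):

    import cv2

    def grab_frame_every_second(path, duration_seconds):
        """Grab roughly one frame per second by seeking with timestamps
        (CAP_PROP_POS_MSEC) instead of counting frames."""
        video = cv2.VideoCapture(path)
        frames = []
        for second in range(duration_seconds):
            # Seek to the requested position in milliseconds, then decode one frame.
            video.set(cv2.CAP_PROP_POS_MSEC, second * 1000)
            success, frame = video.read()
            if not success:
                break
            frames.append(frame)
        video.release()
        return frames

    # Placeholder path and duration, just for illustration.
    frames = grab_frame_every_second("input.webm", 60)
    print("Grabbed %d frames" % len(frames))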