Other articles (6)

  • Videos

    21 April 2011, by

    As with "audio" documents, MediaSpip displays videos, wherever possible, using the HTML5 <video> tag.
    One drawback of this tag is that it is not recognised correctly by some browsers (Internet Explorer, to name one) and that each browser only handles certain video formats natively.
    Its main advantage, on the other hand, is native video support in the browser, which makes it possible to do without Flash and (...)
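
    As a rough, hedged illustration of the format problem mentioned above (not part of the original article; the file names are placeholders), a source video can be encoded once to H.264/AAC and once to VP9/Opus so that at least one of the two plays natively in most browsers:

         # Hypothetical file names; H.264/AAC in MP4 for Safari, IE/Edge and most mobile browsers
         ffmpeg -i source.mov -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k video.mp4
         # VP9/Opus in WebM for browsers that favour open formats
         ffmpeg -i source.mov -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus -b:a 96k video.webm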

  • The farm's regular Cron tasks

    1 December 2010, by

    Managing the farm involves running, at regular intervals, several repetitive tasks known as Cron tasks.
    The super Cron (gestion_mutu_super_cron)
    This task, scheduled every minute, simply calls the Cron of every instance in the mutualisation on a regular basis. Combined with a system Cron on the central site of the mutualisation, this is an easy way to generate regular visits to the various sites and to keep the tasks of rarely visited sites from being too (...)
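
    As a hedged sketch of the "system Cron on the central site" mentioned above (the URL is a placeholder, not the farm's actual address), a crontab entry that quietly requests the central site once a minute is enough to trigger SPIP's task scheduler on a regular basis:

         # Hypothetical crontab line on the central host: one silent request per minute
         * * * * * curl -s -o /dev/null https://farm.example.org/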

  • Publishing on MediaSpip

    13 June 2013

    Can I post content from an iPad tablet?
    Yes, provided your MediaSpip installation is at version 0.2 or later. If in doubt, contact the administrator of your MediaSpip to find out.

On other sites (3172)

  • Using ffmpeg with Flash Media Server and HDS

    20 April 2012, by Jonathan

    I want to use ffmpeg to encode and publish a live stream to Flash Media Server. In order to support iOS devices, I need to implement HTTP Live Streaming as well. The video needs to be in H.264 format and the audio should be AAC. I don't have much experience working with ffmpeg, and I'm having a hard time getting this to work. This is the command that I've tried (and some variations as well):

    ffmpeg.exe -threads 15 -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
         -map_channel 0.1.1 -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 \
         -s vga -vb 100k -f flv "rtmp:///livepkgr/livestream1?adbe-live-event=liveevent" \
         -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s qvga -vb 200k \
         -f flv "rtmp:///livepkgr/livestream2?adbe-live-event=liveevent" \
         -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s vga -vb 350k \
         -f flv "rtmp:///livepkgr/livestream3?adbe-live-event=liveevent"

    When I run this, it appears to connect to FMS, but then I get a lot of error messages about dropped frames - I'm not sure if ANY frames get encoded successfully. My CPU usage is very high as well. I get a 404 error from FMS when I enter the URL of the *.m3u8 file for one of the individual streams (the main livestream.m3u8 file is accessible though). I have also tried outputting to a file instead of FMS, with no success. All I get is some very garbled sound and no video.

    Any suggestions for what options/commands I should use to get this working? Is anyone using ffmpeg with FMS to do HTTP Dynamic Streaming / HLS with MP4 video? I've been struggling to get HDS/HLS working for some time now, and any help would be much appreciated! It shouldn't make a difference, but I'm using FMS on Amazon EC2 with their AMI image.

    Thanks!
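
    Purely as a hedged sketch (not an answer from this page), the same multi-bitrate publish can be written so that each output gets its own options, a fast x264 preset to reduce CPU load, and a fixed keyframe interval (-g 48, i.e. 2 seconds at 24 fps) so the packager can fragment the streams cleanly. HOST, the device names and the AAC encoder choice are assumptions; depending on the ffmpeg build, -c:a aac may need to be swapped for whichever AAC encoder is actually compiled in:

         # HOST is a placeholder for the FMS address (left blank in the original question)
         ffmpeg -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
              -c:v libx264 -preset veryfast -r 24 -g 48 -s 640x480 -b:v 350k \
              -c:a aac -ar 44100 -b:a 128k \
              -f flv "rtmp://HOST/livepkgr/livestream1?adbe-live-event=liveevent" \
              -c:v libx264 -preset veryfast -r 24 -g 48 -s 320x240 -b:v 200k \
              -c:a aac -ar 44100 -b:a 96k \
              -f flv "rtmp://HOST/livepkgr/livestream2?adbe-live-event=liveevent"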

  • Download Partial Video via HTTP (for Remote Thumbnailing)

    13 February 2012, by HuntedC

    I have videos hosted on Amazon S3. I encode them with Zencoder and then store a thumbnail for each video, also using Zencoder. However, I need a way to generate thumbnails at certain points in the video (e.g. 00:00:03, 00:10:32, 01:40:18) and store them either on S3 or on my server.

    ffmpeg allows remote thumbnailing; however, it takes a very long time (sometimes several minutes) to get a thumbnail from the middle of a file. I believe this is because it downloads the entire file up to that point in order to grab the thumbnail.

    My plan is to download the header of the video file via an HTTP byte-range request, guesstimate the byte range where the frame I want should be, download about a second of video from that part of the file via another byte-range request, and save the header and the tiny clip locally. I would then pull the thumbnail from that with ffmpeg and delete the temporary video.

    I have no idea exactly how this would work (I believe the H.264 MP4 files I'm working with have a variable-length header, which is another issue). Any suggestions or better ideas?

    Edit: To clarify, Zencoder thumbnailing is great, but they only allow thumbnail creation in combination with transcoding. I don't want to transcode my video every time I create a new thumbnail, so I need to do this on my own without Zencoder.
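
    As a hedged sketch of the plan described above (the URL, byte ranges and file names are placeholders, not taken from the post), the manual approach could be tried with ranged curl requests, stitching the header and a small window together before handing the result to ffmpeg. Note that the MP4 index (moov atom) stores absolute offsets, so the stitched file may not always decode cleanly:

         # Hypothetical URL and byte ranges; the first range should cover the MP4 header (moov atom)
         # if it sits at the front of the file ("faststart")
         curl -s -r 0-1048575 "https://my-bucket.s3.amazonaws.com/video.mp4" -o head.part
         curl -s -r 52428800-54525951 "https://my-bucket.s3.amazonaws.com/video.mp4" -o middle.part
         cat head.part middle.part > partial.mp4
         # Extract a single frame from the partial file
         ffmpeg -i partial.mp4 -frames:v 1 -q:v 2 thumb.jpg

    A different, usually simpler technique is to let ffmpeg do the ranged reads itself: with -ss placed before -i, it seeks in the remote file using HTTP range requests (which S3 supports) instead of decoding everything up to the requested timestamp:

         ffmpeg -ss 00:10:32 -i "https://my-bucket.s3.amazonaws.com/video.mp4" -frames:v 1 -q:v 2 thumb.jpg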