Other articles (53)

  • Automatic backup of SPIP channels

    April 1, 2010, by

    When setting up an open platform, it is important for hosting providers to have reasonably regular backups available to guard against any potential problem.
    To carry out this task we rely on two SPIP plugins: Saveauto, which performs a regular backup of the database in the form of a MySQL dump (usable in phpMyAdmin), and mes_fichiers_2, which builds a zip archive of the site's important data (the documents, the (...)
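
    Not the plugins' own code, but a rough shell sketch of the two steps such a scheduled backup boils down to; the database name, credentials, backup paths and the IMG/ directory are placeholders and assumptions, not anything MediaSPIP ships:

    # Hypothetical cron-style equivalent of what the two plugins automate (placeholders throughout):
    # a MySQL dump of the database plus a zip archive of the site's important files.
    STAMP=$(date +%Y%m%d)
    mysqldump -u spip_user -p spip_db > /backups/spip_db-$STAMP.sql
    zip -r /backups/spip_files-$STAMP.zip IMG/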

  • (De)Activating features (plugins)

    February 18, 2011, by

    To manage adding and removing extra features (or plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To get there, simply go to the configuration area and open the "Gestion des plugins" (plugin management) page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so as to work seamlessly with each (...)

  • Publishing on MédiaSpip

    June 13, 2013

    Can I post content from an iPad tablet?
    Yes, if your installed MédiaSpip is at version 0.2 or higher. If needed, contact the administrator of your MédiaSpip to find out.

On other sites (10574)

  • Using ffmpeg with Flash Media Server and HDS

    April 20, 2012, by Jonathan

    I want to use ffmpeg to encode and publish a live stream to Flash Media Server. In order to support iOS devices, I need to implement HTTP Live Streaming as well. The video needs to be in H.264 format and the audio should be AAC. I don't have much experience working with ffmpeg, and I'm having a hard time getting this to work. This is the command that I've tried (and some variations as well):

    ffmpeg.exe -threads 15 -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
         -map_channel 0.1.1 -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 \
         -s vga -vb 100k -f flv "rtmp:///livepkgr/livestream1?adbe-live-event=liveevent" \
         -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s qvga -vb 200k \
         -f flv "rtmp:///livepkgr/livestream2?adbe-live-event=liveevent" \
         -r 24 -acodec libvo_aacenc -ar 22050 -ab 128k -vcodec libx264 -s vga -vb 350k \
         -f flv "rtmp:///livepkgr/livestream3?adbe-live-event=liveevent"

    When I run this, it appears to connect to FMS, but then I get a lot of error messages about dropped frames - I'm not sure if ANY frames get encoded successfully. My CPU usage is very high as well. I get a 404 error from FMS when I enter the URL of the *.m3u8 file for one of the individual streams (the main livestream.m3u8 file is accessible though). I have also tried outputting to a file instead of FMS, with no success. All I get is some very garbled sound and no video.

    Any suggestions for what options/commands I should use to get this working? Is anyone using ffmpeg with FMS to do HTTP Dynamic Streaming / HLS with MP4 video? I've been struggling to get HDS/HLS working for some time now, and any help would be much appreciated! It shouldn't make a difference, but I'm using FMS on Amazon EC2 with their AMI image.

    Thanks!
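
    A plausible direction rather than a verified fix: encoding three separate x264 outputs at default settings is very CPU-heavy, and HDS/HLS fragmenting generally wants a regular keyframe interval, so a first step could be to get one lighter stream working before adding the other bitrates back. The sketch below is untested; it keeps the capture device names and the blank RTMP host exactly as in the question, drops -map_channel for simplicity, and adds -preset veryfast, -pix_fmt yuv420p and a fixed keyframe interval (-g 48, one keyframe every two seconds at 24 fps):

    # Hypothetical single-output variant of the command above (untested sketch).
    ffmpeg.exe -f dshow -i video="USB2.0 UVC WebCam":audio="Microphone (Realtek High Defini" \
         -vcodec libx264 -preset veryfast -pix_fmt yuv420p -r 24 -g 48 -s vga -vb 350k \
         -acodec libvo_aacenc -ar 22050 -ab 128k \
         -f flv "rtmp:///livepkgr/livestream1?adbe-live-event=liveevent"

    Once that single stream encodes and plays back cleanly, the two extra renditions can be added back one output at a time.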

  • Download Partial Video via HTTP (for Remote Thumbnailing)

    February 13, 2012, by HuntedC

    I have videos hosted on Amazon S3. I encode them with Zencoder and store a thumbnail for each video at that point. However, I need a way to generate thumbnails at certain points in the video (e.g. 00:00:03, 00:10:32, 01:40:18) and store them either on S3 or on my server.

    ffmpeg allows remote thumbnailing; however, it takes a very long time (sometimes several minutes) to get a thumbnail from the middle of a file. I believe this is because it downloads the entire file up to that point before extracting the thumbnail.

    My plan is to somehow download the header of the video file via an HTTP byte-range request, guesstimate the byte range where the frame I want should be, download about a second of video from that part of the file with another byte-range request, and save the header and the tiny clip locally. I would then pull the thumbnail from that with ffmpeg and delete the temporary video.

    I have no idea how exactly this would work (I believe the H.264 MP4 files I'm working with have a variable-length header, which is a separate issue). Any suggestions or better ideas?

    Edit: To clarify, Zencoder thumbnailing is great, but it only allows thumbnail creation in combination with transcoding. I don't want to transcode my video every time I create a new thumbnail, so I need to do this on my own without Zencoder.
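
    One direction worth sketching, under two assumptions that may not hold here: that S3 serves the files with HTTP Range support on GET requests, and that each MP4's moov atom sits at the start of the file. In that case ffmpeg can seek the remote URL directly, so placing -ss before -i fetches only the bytes around the requested frame instead of everything up to it. The bucket, key and timestamp below are placeholders:

    # Hypothetical remote thumbnail grab; URL and timestamp are placeholders.
    ffmpeg -ss 00:10:32 -i "https://example-bucket.s3.amazonaws.com/video.mp4" \
           -vframes 1 -q:v 2 thumb-00-10-32.jpg

    If the moov atom turns out to sit at the end of the uploaded files, re-muxing them once so it comes first (for instance with the qt-faststart tool distributed alongside ffmpeg) would make this kind of remote seeking practical without hand-rolling byte-range requests.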