
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011
Updated: July 2013
Language: French
Type: Text
Other articles (57)
-
Sites built with MediaSPIP
2 May 2011. This page presents some of the sites running MediaSPIP.
You can of course add your own using the form at the bottom of the page. -
Publishing on MediaSPIP
13 June 2013. Can I post content from an iPad tablet?
Yes, if your MediaSPIP installation is at version 0.2 or later. If needed, contact the administrator of your MediaSPIP to find out. -
HTML5 audio and video support
10 April 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (9544)
-
Use a java-ffmpeg wrapper, or simply use the Java runtime to execute ffmpeg?
12 March 2017, by user156153. I'm pretty new to Java and need to write a program that listens for video-conversion instructions and converts a video whenever a new instruction arrives (the instructions are stored in Amazon SQS, but that's irrelevant to my question).
I'm facing a choice: either use the Java Runtime to exec ffmpeg (as from the command line), or use an ffmpeg wrapper written in Java: http://fmj-sf.net/ffmpeg-java/getting_started.php
I'd much prefer using the Java Runtime to exec ffmpeg directly and avoid the java-ffmpeg wrapper, since I would have to learn the library. So my question is: are there any benefits to using the java-ffmpeg wrapper over exec'ing ffmpeg directly through Runtime? I don't need ffmpeg to play videos, just to convert them.
Thanks
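Exec'ing ffmpeg directly amounts to launching a child process, draining its output, and checking the exit code. A minimal sketch of that approach, assuming ffmpeg is on the PATH and using ProcessBuilder rather than the older Runtime.exec:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class FfmpegRunner {
    // Launch a command as a child process and return its exit code (0 = success).
    public static int run(String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        pb.redirectErrorStream(true); // fold ffmpeg's stderr log into stdout
        Process p = pb.start();
        // Drain the combined output so the child can't block on a full pipe buffer.
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()))) {
            while (r.readLine() != null) {
                // discard, or hand each line to a logger
            }
        }
        return p.waitFor();
    }
}
```

A conversion would then look like `FfmpegRunner.run("ffmpeg", "-y", "-i", "in.m4v", "out.webm")` (file names hypothetical). The main argument for a wrapper is programmatic access to codecs and frames; for plain file-to-file conversion, a child process like this is usually enough.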
-
Store converted video in a buffer to transfer to S3
24 May 2012, by Chris. I have a system set up to upload an image, take the temporarily uploaded file, convert it to a resized JPEG, read the buffer from that conversion, and send the buffer to Amazon S3 for storage as an image. It works wonderfully because no permanent file is stored on my server; everything is on S3.
Now I am attempting to add the same functionality for video. The process runs through, but the resulting files stored on Amazon's S3 servers are 1.5 KB apiece instead of multi-megabyte videos.
My code is as follows:
public function transfer($method, $file, $bucketName, $filename, $contentType) {
    switch ($method) {
        case 'file':
            if ($this->s3->putObject($this->s3->inputFile($file, false), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
        case 'string':
            if ($this->s3->putObject($file, $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
        case 'resource':
            if ($this->s3->putObject($this->s3->inputResource(fopen($file, 'rb'), filesize($file)), $bucketName, $filename, S3::ACL_PUBLIC_READ, array(), $contentType))
                return true;
            else
                return false;
    }
    return false;
}
/**
 * AmazonS3Handler - convert()
 */
public function convert($file, $type)
{
    $ffmpeg = "/usr/bin/ffmpeg";
    $cmd['webm'] = $ffmpeg . " -i " . $file . " -vcodec libvpx -acodec libvorbis -ac 2 -f webm -g 30 2>&1";
    $cmd['ogv']  = $ffmpeg . " -i " . $file . " -vcodec libtheora -acodec libvorbis -ac 2 2>&1";
    $cmd['mp4']  = $ffmpeg . " -i " . $file . " -vcodec libx264 -acodec libfaac -ac 2 2>&1";
    $cmd['jpg']  = $ffmpeg . " -i " . $file . " -vframes 30";
    ob_start();
    passthru($cmd[$type]);
    $fileContents = ob_get_contents();
    ob_end_clean();
    return $fileContents;
}
From my understanding, passthru should return raw output, which could be picked up by the output buffer.
Am I doing something wrong? Is there a better way to convert a video on my server while keeping the data on S3?
Thanks!
EDIT: I've boiled it down to the execution of the command. ffmpeg wasn't being located, so I changed the path to "/usr/bin/ffmpeg"; no more error 127. Now when I run exec() (not passthru) I get error 1.
EDIT2: Here is the output of running the script:
Array
(
[0] => ffmpeg version 0.7.3-4:0.7.3-0ubuntu0.11.10.1, Copyright (c) 2000-2011 the Libav developers
[1] => built on Jan 4 2012 16:08:51 with gcc 4.6.1
[2] => configuration: --extra-version='4:0.7.3-0ubuntu0.11.10.1' --arch=amd64 --prefix=/usr --enable-vdpau --enable-bzlib --enable-libgsm --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvorbis --enable-pthreads --enable-zlib --enable-libvpx --enable-runtime-cpudetect --enable-vaapi --enable-gpl --enable-postproc --enable-swscale --enable-x11grab --enable-libdc1394 --enable-shared --disable-static
[3] => libavutil 51. 7. 0 / 51. 7. 0
[4] => libavcodec 53. 6. 0 / 53. 6. 0
[5] => libavformat 53. 3. 0 / 53. 3. 0
[6] => libavdevice 53. 0. 0 / 53. 0. 0
[7] => libavfilter 2. 4. 0 / 2. 4. 0
[8] => libswscale 2. 0. 0 / 2. 0. 0
[9] => libpostproc 52. 0. 0 / 52. 0. 0
[10] =>
[11] => Seems stream 1 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 30.30 (1000/33)
[12] => Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'iv.m4v':
[13] => Metadata:
[14] => major_brand : M4V
[15] => minor_version : 1
[16] => compatible_brands: M4V M4A mp42isom
[17] => creation_time : 2012-01-23 04:00:18
[18] => encoder : Mac OS X v10.7.2 (CMA 889, CM 705.42, x86_64)
[19] => Duration: 00:00:11.70, start: 0.000000, bitrate: 345 kb/s
[20] => Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 125 kb/s
[21] => Metadata:
[22] => creation_time : 2012-01-23 04:00:18
[23] => Stream #0.1(und): Video: h264 (Main), yuv420p, 640x360 [PAR 1:1 DAR 16:9], 206 kb/s, 30.30 fps, 30.30 tbr, 1k tbn, 2k tbc
[24] => Metadata:
[25] => creation_time : 2012-01-23 04:00:18
[26] => iv.webm: Permission denied
)
It looks like the permissions aren't set right; however, the file has chmod 777 permissions. What other permissions need to be changed?
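The tiny files in S3 are consistent with capturing ffmpeg's log rather than its encoded output: the $cmd strings above never name an output target, and 2>&1 folds the stderr log into what the output buffer sees. The general fix, sketched here in Java rather than PHP (hypothetical names; assumes an ffmpeg build with the relevant encoders), is to give ffmpeg an explicit muxer with -f and send the encoded bytes to stdout via pipe:1, keeping stderr out of the captured stream:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCapture {
    // Run a command and return everything it writes to stdout as raw bytes.
    public static byte[] capture(String... cmd) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(cmd);
        // Don't let a full stderr pipe stall the child; the log is not our data.
        pb.redirectError(ProcessBuilder.Redirect.DISCARD);
        Process p = pb.start();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (InputStream out = p.getInputStream()) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = out.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
        }
        p.waitFor();
        return buf.toByteArray();
    }

    // Hypothetical usage: "-f webm" names the muxer explicitly (ffmpeg cannot
    // guess it from a file extension when writing to a pipe) and "pipe:1"
    // directs the encoded stream to stdout.
    public static byte[] toWebm(String input) throws IOException, InterruptedException {
        return capture("ffmpeg", "-i", input,
                "-vcodec", "libvpx", "-acodec", "libvorbis",
                "-f", "webm", "pipe:1");
    }
}
```

The same shape works in PHP (an explicit `-f webm pipe:1` in the command, binary-safe capture, no `2>&1`); the byte array can then be handed to putObject as a string upload.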
-
ffmpeg - Record Server Desktop Without Connection
2 September 2015, by Chris. I set up an application that uses ffmpeg to record the desktop of an Amazon AWS EC2 instance running Windows Server 2012 R2. It records the desktop and writes the result to a file.
This works as long as a Remote Desktop or TeamViewer connection to that particular EC2 instance is active. As soon as I close the Remote Desktop and TeamViewer connections, the recording stops, and it resumes as soon as I reconnect.
I assume this is because the GPU doesn't deliver frames when no display is in use.
How can I make sure that frames are constantly being rendered so that I can record them?