
Media (1)
-
Carte de Schillerkiez
May 13, 2011
Updated: September 2011
Language: English
Type: Text
Other articles (30)
-
MediaSPIP v0.2
June 21, 2013. MediaSPIP 0.2 is the first MediaSPIP stable release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed with other manual (...) -
Libraries and binaries specific to video and audio processing
January 31, 2010. The following software and libraries are used by SPIPmotion in one way or another.
Required binaries: FFMpeg: the main encoder, used to transcode almost any type of video or audio file into formats readable on the Internet. See this tutorial for its installation; Oggz-tools: tools for inspecting ogg files; Mediainfo: retrieves information from most video and audio formats;
Complementary, optional binaries: flvtool2: (...) -
HTML5 audio and video support
April 10, 2011. MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used.
The HTML5 player used was created specifically for MediaSPIP: it is fully customizable graphically to match a chosen theme.
These technologies make it possible to deliver video and audio both to conventional computers (...)
On other sites (4791)
-
Trying to transcode video with FFMpeg Layer on Lambda
September 7, 2020, by incovenant
I am trying to convert .ogv files to .mp4 using an ffmpeg layer on AWS Lambda.




I followed a tutorial from the people at Serverless Framework to convert .mp4 files to GIFs, and that worked out great. Using the same ffmpeg static build (ffmpeg-git-amd64-static.tar.xz), I set out to convert .ogv files to .mp4 files.


So far I have had success with uploading videos to an S3 bucket, having a Lambda retrieve a video, do something to it using the ffmpeg binary, and copy a new file back to S3.




The Problem:



The videos that are created will not play.



Data point 1: the resultant files from the function are far too small.



The input video file is 1.3 MB and the output video is only 256.0 KB.



Data point 2: moov atom not found.



After copying the resultant video from S3 to my local machine, I try to play it using ffplay and I receive this error:


[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fd613093400] moov atom not found
frank.mp4: Invalid data found when processing input




As far as I have been able to tell, the moov atom is supposed to contain important metadata about .mp4 files.




Implementation:



I used the Serverless Framework to set up the AWS infrastructure.



Here are a few different ffmpeg commands I have tried:


1st attempt:



// convert to mp4!
spawnSync(
  "/opt/ffmpeg/ffmpeg",
  [
    "-i",
    `/tmp/${record.s3.object.key}`,
    "-vcodec",
    "libx264",
    "-acodec",
    "aac",
    `/tmp/${record.s3.object.key}.mp4`
  ],
  { stdio: "inherit" }
);




2nd attempt:



// convert to mp4!
spawnSync(
  "/opt/ffmpeg/ffmpeg",
  [
    "-i",
    `/tmp/${record.s3.object.key}`,
    `/tmp/${record.s3.object.key}.mp4`
  ],
  { stdio: "inherit" }
);




3rd attempt:



I found this approach in a Stack Overflow question, and the poster said that it worked for him.



// convert to mp4!
spawnSync(
  "/opt/ffmpeg/ffmpeg",
  [
    '-i',
    `/tmp/${record.s3.object.key}`,
    '-codec:v',
    'libx264',
    '-profile:v',
    'main',
    '-preset',
    'slow',
    '-b:v',
    '400k',
    '-maxrate',
    '400k',
    '-bufsize',
    '800k',
    '-threads',
    '0',
    '-b:a',
    '128k',
    `/tmp/${record.s3.object.key}.mp4`
  ],
  { stdio: "inherit" }
);




Each one of these works swell on my local machine.



If the ffmpeg binary I am using were not such a popular one (I have seen it on multiple sites dealing with transcoding on Lambda), my guess would be that it is an issue with the layer... perhaps it is anyway.



Any insight would be greatly appreciated. Thank you.


-
How to allow a worker to run an ffmpeg command on Heroku for my Python/Django app?
March 10, 2013, by GetItDone
I've been stuck trying to figure this out for weeks. I previously asked a similar question, found here, but never got any replies. I really cannot find any good documentation anywhere. All I need to do is use a worker (I don't care which; I have both django-celery and rq installed) to convert a file to flv when it is uploaded from a form. I was able to get this done easily locally, but after over a week I haven't been able to get it to work no matter what I have tried. I tried adding a tasks.py file for celery, or a worker.py file for rq, and I have no idea what else (if anything) needs to be done, such as in my settings.py or Procfile. My Procfile looks like:
web: gunicorn lftv.wsgi -b 0.0.0.0:$PORT
celeryd: celery -A tasks worker --loglevel=info
worker: python worker.py

My requirements.txt showing what I have installed looks like this:
Django==1.4.3
Logbook==0.4.1
amqp==1.0.6
anyjson==0.3.3
billiard==2.7.3.19
boto==2.6.0
celery==3.0.13
celery-with-redis==3.0
distribute==0.6.31
dj-database-url==0.2.1
django-celery==3.0.11
django-s3-folder-storage==0.1
django-storages==1.1.6
gunicorn==0.16.1
kombu==2.5.4
pil==1.1.7
psycopg2==2.4.5
python-dateutil==1.5
pytz==2012j
redis==2.7.2
requests==1.1.0
rq==0.3.2
six==1.2.0
times==0.6

The only relevant things in my settings.py are as follows:
BROKER_BACKEND = 'django'
BROKER_URL = #For this I copy/pasted the code from my redistogo add-on from heroku. Not sure if correct
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 1800}

Without trying to take up too much more space, my tasks.py looks like this:
import subprocess

from celery import task

@task
def ffmpeg_conversion(input_file):
    # shell=True is needed because input_file is a single command string
    converted_file = subprocess.call(input_file, shell=True)
    return converted_file

I use S3 to store my static and media files, and the upload works (adding uploads to my bucket); however, no matter what I try, the conversion never will. Is there a good tutorial for absolute beginners? I followed the Heroku redis tutorial, the celery docs, the rq docs, and whatever else I could find, and got the examples to work, but the worker will not execute the command from my view. For example, one of the many things I tried:
...
ffmpeg = "ffmpeg -i %s -acodec mp3 -ar 22050 -f flv -s 320x240 %s" % (sourcefile, targetfile)
ffmpegresult = ffmpeg_conversion.delay(ffmpeg)
...

or using rq:
...
q = Queue(connection=conn)
result = q.enqueue(ffmpeg_conversion, ffmpeg)
...

It seems like this should be simple, but I am completely self-taught, have never deployed a project before, and there just doesn't seem to be any good documentation or tutorial for what I am trying to do. I can't judge whether I am completely off and missing something significant, or relatively close to getting this to work. I really do appreciate any input whatsoever; this is driving me nuts. Thanks in advance.
-
FFMPEG Encoding a video from a Readable stream
November 4, 2022, by Michael Aubry
I'm facing an issue with the seeked event in Chrome. The issue seems to be due to how the video being seeked is encoded.

The problem seems to occur most frequently when using ytdl-core and piping a Readable stream into an FFMPEG child process.

let videoStream: Readable = ytdl.downloadFromInfo(info, {
  ...options,
  quality: "highestvideo"
});



With ytdl-core, in order to get the highest quality you must combine the audio and the video yourself. So here is how I am doing it.

const ytmux = (link, options: any = {}) => {
  const result = new stream.PassThrough({
    highWaterMark: options.highWaterMark || 1024 * 512
  });

  ytdl.getInfo(link, options).then((info: videoInfo) => {
    let audioStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestaudio"
    });
    let videoStream: Readable = ytdl.downloadFromInfo(info, {
      ...options,
      quality: "highestvideo"
    });
    // create the ffmpeg process for muxing
    let ffmpegProcess: any = cp.spawn(
      ffmpegPath.path,
      [
        // suppress non-crucial messages
        "-loglevel",
        "8",
        "-hide_banner",
        // input audio and video by pipe
        "-i",
        "pipe:3",
        "-i",
        "pipe:4",
        // map audio and video correspondingly

        // no need to change the codec
        // output mp4 and pipe
        "-c:v",
        "libx264",
        "-x264opts",
        "fast_pskip=0:psy=0:deblock=-3,-3",
        "-preset",
        "veryslow",
        "-crf",
        "18",
        "-c",
        "copy",
        "-pix_fmt",
        "yuv420p",
        "-movflags",
        "frag_keyframe+empty_moov",
        "-g",
        "300",
        "-f",
        "mp4",
        "-map",
        "0:v",
        "-map",
        "1:a",
        "pipe:5"
      ],
      {
        // no popup window for Windows users
        windowsHide: true,
        stdio: [
          // silence stdin/out, forward stderr,
          "inherit",
          "inherit",
          "inherit",
          // and pipe audio, video, output
          "pipe",
          "pipe",
          "pipe"
        ]
      }
    );

    audioStream.pipe(ffmpegProcess.stdio[4]);
    videoStream.pipe(ffmpegProcess.stdio[3]);
    ffmpegProcess.stdio[5].pipe(result);
  });
  return result;
};



I am playing around with tons of different arguments. The result of this video gets uploaded to a Google bucket. Then, when seeking in Chrome, I am getting some issues with certain frames: they are not being seeked.

When I pass it through FFMPEG locally and re-encode it, then upload it, I notice there are no issues.


Here is an image comparing the two results when running ffmpeg -i FILE (the one on the left works fine and the differences are minor).



I tried adjusting the arguments in the muxer code and am continuing to compare the result with the re-encoded video. I have no idea why this is happening; it is something to do with the frames.