
Media (2)
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
Video d’abeille en portrait
14 May 2011
Updated: February 2012
Language: French
Type: Video
Other articles (78)
-
Improvements to the base version
13 September 2013
Nicer multiple selects
The Chosen plugin improves the ergonomics of multiple-select fields. See the following two images for a comparison.
To use it, activate the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-select lists (...) -
Emballe médias: what is it for?
4 February 2011
This plugin is for managing sites that publish documents of all kinds online.
It creates "media": a "media" is an article in the SPIP sense, created automatically when a document is uploaded, whether audio, video, image or text; only a single document can be linked to a "media" article; -
Custom menus
14 November 2010
MediaSPIP uses the Menus plugin to manage several configurable navigation menus.
This lets channel administrators fine-tune the configuration of these menus.
Menus created when the site is initialised
By default, three menus are created automatically when the site is initialised: The main menu; Identifier: barrenav; This menu is generally inserted at the top of the page, after the header block, and its identifier makes it compatible with Zpip-based templates; (...)
On other sites (6684)
-
Confusion about PTS in video files and media streams
11 November 2014, by user2452253
Is it possible that the PTS of a particular frame in a file differs from the PTS of the same frame when the same file is being streamed?
When I read a frame using av_read_frame, I store the video stream in an AVStream. After I decode the frame with avcodec_decode_video2, I store that frame's timestamp in an int64_t using av_frame_get_best_effort_timestamp. Now, if the program gets its input from a file, I get a different timestamp than when I stream the input (from the same file) to the program.
To change the input type I simply change the argv argument from "/path/to/file.mp4" to something like "udp://localhost:1234", then I stream the file with ffmpeg on the command line: "ffmpeg -re -i /path/to/file.mp4 -f mpegts udp://localhost:1234". Could it be that the "-f mpegts" argument changes some characteristics of the media?
Below is my code (simplified). From reading the ffmpeg mailing list archives I realised that the time_base I'm looking for is in the AVStream, not the AVCodecContext. Instead of av_frame_get_best_effort_timestamp I have also tried packet.pts, but the results don't change.
I need the timestamps to keep a notion of frame number in the streaming video being received.
I would really appreciate any sort of help.

//..
// argv[1] = "/file.mp4";
argv[1] = "udp://localhost:7777";
// define AVFormatContext, AVFrame, etc.
// register av, avcodec, avformat_network_init(), etc.
avformat_open_input(&pFormatCtx, argv[1], NULL, NULL); // argv[1], not argv
avformat_find_stream_info(pFormatCtx, NULL);
// find the video stream...
// pointer to the codec context...
// open codec...
pFrame = av_frame_alloc();
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    AVStream *stream = pFormatCtx->streams[videoStream];
    if (packet.stream_index == videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished) {
            // best-effort timestamp, expressed in stream->time_base units
            int64_t perts = av_frame_get_best_effort_timestamp(pFrame);
            if (isMyFrame(pFrame)) {
                // convert to seconds using the stream (not codec) time base
                cout << perts * av_q2d(stream->time_base) << "\n";
            }
        }
    }
    // free allocated space
}
//.. -
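For reference, the behaviour described above is expected: when ffmpeg remuxes to MPEG-TS, the muxer rewrites timestamps onto its own 90 kHz clock and adds an initial offset, so the absolute PTS of a given frame differs between the file and the stream even though the spacing between frames is preserved. Below is a minimal sketch of one workaround, written in Python with PyAV (an assumption; the original code is C++, but PyAV wraps the same libav* calls): subtract the stream's start time to get offset-independent relative timestamps, then scale by the average frame rate to approximate a frame number.

import av  # PyAV: pip install av

# Works the same for "/path/to/file.mp4" or "udp://localhost:1234".
container = av.open("udp://localhost:1234")
stream = container.streams.video[0]

for frame in container.decode(stream):
    # frame.pts and stream.start_time are both in stream.time_base units.
    rel = (frame.pts - (stream.start_time or 0)) * stream.time_base
    frame_no = int(rel * stream.average_rate)  # approximate frame index
    print(frame_no, float(rel))

Because the start-time offset is removed, the printed values should line up between the file and the streamed copy of the same content, which is what is needed to keep a stable notion of frame number.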
Live video encoding using...?
27 December 2013, by Basic
I'm attempting to write a fairly simple application that will stream video/audio from a webcam to someone else across the internet (à la Skype, but with more control).
There seems to be very little useful or relevant information on the subject, and what I can find is largely outdated. From my research so far, x264 seems to be the way to go, as it offers an ultrafast preset designed for exactly this situation.
I'm able to turn on the webcam and receive a stream of images. I can also listen on an audio device and get samples.
Where I'm failing is encoding that information in a way that can be streamed with minimal latency (from what I've read, a 200 ms delay is the goal for no perceptible lag, including network latency, so let's aim for 100-150 ms).
Things I've tried
ffmpeg
This seems to be the most widely used option for encoding. I've had two real issues with it. Firstly, even using x264 with no look-ahead and the bare minimum of buffering for stability, the delay seems to be on the order of 700 ms using image2pipe (see the sketch after this question for a raw-frame pipe alternative). Secondly, it requires ffmpeg to be installed; being able to do this without an external dependency would be nice.
VLC
As with ffmpeg, this requires an external program, which is a negative. Even worse, I can't seem to get latency under 2 seconds, and it seems to increase over time. I've also only been able to get VLC to capture from the camera itself rather than accept a stream of images, which means I don't get a chance to pre-process them.
DirectShow
I've seen a number of sites recommending the Windows DirectShow encoders, but I haven't been able to find one that works at anything like real time. In fact, the only one I've managed to get going reliably is a Windows Media codec with massive latency and a fairly large output size.
Other considerations
None of the above address the problem of adding an audio stream to the video. I'm not sure if I should attempt to encode them together or send a separate stream alongside the video.
In short, I've been Googling for a week or so now and haven't found a decent way to do this. Can someone please point me at a decent example/guide?
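On the image2pipe latency point above: one commonly suggested configuration is to pipe raw frames into ffmpeg with x264's zerolatency tuning, which removes the image-decoding step and the encoder's look-ahead buffering. A minimal sketch in Python (the frame size, rate, and UDP destination are placeholder assumptions):

import subprocess

# Placeholder capture parameters; match them to the real webcam output.
WIDTH, HEIGHT, FPS = 640, 480, 30

# -preset ultrafast minimises encode time; -tune zerolatency disables
# look-ahead and B-frames, both of which add buffering delay.
encoder = subprocess.Popen(
    [
        "ffmpeg",
        "-f", "rawvideo", "-pix_fmt", "bgr24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",                          # raw frames arrive on stdin
        "-c:v", "libx264",
        "-preset", "ultrafast", "-tune", "zerolatency",
        "-f", "mpegts", "udp://localhost:1234",
    ],
    stdin=subprocess.PIPE,
)

# Each frame is WIDTH * HEIGHT * 3 bytes of packed BGR pixels:
# encoder.stdin.write(frame_bytes)

This still depends on an external ffmpeg binary, so it does not address the no-dependency wish; audio would typically be captured separately and muxed into the same transport stream as a second input.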
-
how to allow a worker to run an ffmpeg command on heroku for my python/django app?
10 March 2013, by GetItDone
I've been stuck trying to figure this out for weeks. I previously asked a similar question (found here) but never got any replies. I really cannot find any good documentation anywhere. All I need to do is use a worker (I don't care which; I have both django-celery and rq installed) to convert a file to flv when it is uploaded from a form. I was able to do this easily locally, but after over a week I haven't been able to get it to work no matter what I have tried. I tried adding a tasks.py file for celery, or a worker.py file for rq, and I have no idea what else (if anything) needs to be done, for instance in my settings.py or Procfile. My Procfile looks like:
web: gunicorn lftv.wsgi -b 0.0.0.0:$PORT
celeryd: celery -A tasks worker --loglevel=info
worker: python worker.py

My requirements.txt, showing what I have installed, looks like this:
Django==1.4.3
Logbook==0.4.1
amqp==1.0.6
anyjson==0.3.3
billiard==2.7.3.19
boto==2.6.0
celery==3.0.13
celery-with-redis==3.0
distribute==0.6.31
dj-database-url==0.2.1
django-celery==3.0.11
django-s3-folder-storage==0.1
django-storages==1.1.6
gunicorn==0.16.1
kombu==2.5.4
pil==1.1.7
psycopg2==2.4.5
python-dateutil==1.5
pytz==2012j
redis==2.7.2
requests==1.1.0
rq==0.3.2
six==1.2.0
times==0.6

The only relevant parts of my settings.py are as follows:
BROKER_BACKEND = 'django'
BROKER_URL = # For this I copy/pasted the value from my redistogo add-on on Heroku. Not sure if correct
BROKER_TRANSPORT_OPTIONS = {'visibility_timeout': 1800}
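An aside on the copy/pasted broker setting above: on Heroku, the Redis To Go add-on exposes the connection string in the REDISTOGO_URL environment variable, so a common pattern is to read it from the environment rather than hard-coding it. A sketch, assuming that add-on (note also that BROKER_BACKEND = 'django' selects the database transport, which competes with a Redis BROKER_URL, so only one of the two should be set):

import os

# Assumes the Heroku redistogo add-on, which sets REDISTOGO_URL.
BROKER_URL = os.environ.get('REDISTOGO_URL', 'redis://localhost:6379/0')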
Without trying to take up too much more space, my tasks.py looks like this:

import subprocess
from celery import task  # needed for the @task decorator

@task
def ffmpeg_conversion(input_file):
    # subprocess.call expects an argument list; passing the whole
    # command as one string only works with shell=True
    converted_file = subprocess.call(input_file.split())
    return converted_file

I use S3 to store my static and media files, and the upload works (uploads are added to my bucket), but no matter what I try, the conversion never runs. Is there a good tutorial for absolute beginners? I followed the Heroku redis tutorial, the celery docs, the rq docs, and whatever else I could find, and got the examples to work, but the worker will not execute the command from my view. For example, one of the many things I tried:
...
ffmpeg = "ffmpeg -i %s -acodec mp3 -ar 22050 -f flv -s 320x240 %s" % (sourcefile, targetfile)
ffmpegresult = ffmpeg_conversion.delay(ffmpeg)
...

or, using rq:
...
q = Queue(connection=conn)
result = q.enqueue(ffmpeg_conversion, ffmpeg)
...

It seems like it should be simple, but I am completely self-taught and have never deployed a project before, and there just doesn't seem to be any good documentation or tutorial for what I am trying to do. I can't judge whether I am way off and missing something significant, or relatively close to getting this to work. I really do appreciate any input whatsoever; this is driving me nuts. Thanks in advance.
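One thing worth ruling out before debugging the queue wiring above: stock Heroku dynos do not ship with an ffmpeg binary, so a task that shells out to ffmpeg can fail even when celery or rq is configured correctly; vendoring the binary or adding an ffmpeg buildpack is the usual remedy. A small diagnostic task (a hypothetical helper, in the same celery 3.0 style as tasks.py above) can confirm what the worker dyno actually sees:

import subprocess
from celery import task

@task
def ffmpeg_available():
    # Returns 0 if an ffmpeg binary is on the worker's PATH, 1 otherwise.
    return subprocess.call(["which", "ffmpeg"])

If this returns non-zero when run on the worker dyno, no amount of queue configuration will make ffmpeg_conversion succeed.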