
Other articles (103)
-
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (…)
-
Publishing on MédiaSpip
13 June 2013. Can I post content from an iPad tablet?
Yes, if your installed MédiaSpip is at version 0.2 or higher. If necessary, contact the administrator of your MédiaSpip to find out.
-
Adding user-specific information and other author-related behaviour changes
12 April 2011. The simplest way to add information to authors is to install the Inscription3 plugin. It also lets you change certain user-related behaviours (see its documentation for more information).
It is also possible to add fields to authors by installing the plugins "champs extras 2" and "Interface pour champs extras".
On other sites (12771)
-
FFmpeg, videotoolbox and avplayer in iOS
9 January 2017, by Hwangho Kim. I have a question about how these things are connected and what exactly they do.
FYI, I have some experience with video players and with encoding and decoding.
In my job I handle UDP streaming from a server: I receive it with FFmpeg, decode it, and draw it with OpenGL. I also use FFmpeg for a video player.
These are the questions...
1. Can only FFmpeg decode the UDP stream (encoded with FFmpeg on the server), or not?
I found some useful information about VideoToolbox, which can decode streams with hardware acceleration on iOS. So could I also decode the stream from the server with VideoToolbox?
2. If it is possible to decode with VideoToolbox (I mean, if VideoToolbox could be a replacement for FFmpeg), then what is the videotoolbox source code in FFmpeg? Why is it there?
In my decoder I create an AVCodecContext from the stream, and its hwaccel and hwaccel_context fields are both set to NULL. I thought videotoolbox was a kind of API that helps FFmpeg use the hardware acceleration of iOS, but that does not seem to be true so far...
3. If VideoToolbox can decode streams, can it also decode local H.264 files, or are only streams possible?
AVPlayer is a good tool for playing a video, but if VideoToolbox could replace AVPlayer, what would the benefit be? Or is that impossible?
4. Does FFmpeg only use the CPU for decoding (software decoder), or hwaccel as well?
When I play a video with my FFmpeg player, CPU usage goes over 100%. Does that mean FFmpeg uses only the software decoder, or is there a way to use hwaccel?
Please excuse my poor English; any answer would be appreciated.
Thanks.
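For reference on questions 2 and 4: the videotoolbox code inside FFmpeg is the glue that lets FFmpeg's decoders hand the actual decoding work to Apple's hardware decoder, and FFmpeg decodes purely in software unless a hwaccel is explicitly enabled. A minimal sketch, assuming an FFmpeg build with VideoToolbox support (3.2 or later) and with error handling trimmed:

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

// Sketch: ask FFmpeg to decode through VideoToolbox instead of in software.
static int use_videotoolbox(AVCodecContext *dec_ctx)
{
    AVBufferRef *hw_dev = NULL;
    int ret = av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VIDEOTOOLBOX,
                                     NULL, NULL, 0);
    if (ret < 0)
        return ret;                                  // no hwaccel: stay in software
    dec_ctx->hw_device_ctx = av_buffer_ref(hw_dev);  // decoder keeps its own reference
    av_buffer_unref(&hw_dev);
    return 0;
}

Call this before avcodec_open2, and supply a get_format callback that selects AV_PIX_FMT_VIDEOTOOLBOX to receive hardware frames. It works the same whether the input is a UDP stream or a local H.264 file (question 3), and it is also why CPU usage stays high by default (question 4): without it, decoding is pure software.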
-
Confusion about PTS in video files and media streams
11 November 2014, by user2452253. Is it possible that the PTS of a particular frame in a file is different from the PTS of the same frame in the same file while it is being streamed?
When I read a frame using av_read_frame I store the video stream in an AVStream. After I decode the frame with avcodec_decode_video2, I store the timestamp of that frame in an int64_t using av_frame_get_best_effort_timestamp. If the program gets its input from a file, I get a different timestamp than when I stream the same file to the program.
To change the input type I simply change the argv argument from "/path/to/file.mp4" to something like "udp://localhost:1234", then I stream the file with ffmpeg on the command line: "ffmpeg -re -i /path/to/file.mp4 -f mpegts udp://localhost:1234". Can it be because the "-f mpegts" argument changes some characteristics of the media?
Below is my code (simplified). By reading the ffmpeg mailing-list archives I realized that the time_base I am looking for is in the AVStream, not the AVCodecContext. Instead of av_frame_get_best_effort_timestamp I have also tried packet.pts, but the results do not change.
I need the timestamps to have a notion of frame number in a streaming video that is being received.
I would really appreciate any sort of help.

//...
// argv[1] = "/file.mp4";
argv[1] = "udp://localhost:7777";
// define AVFormatContext, AVFrame, etc.
// register the codecs, call avformat_network_init(), etc.
avformat_open_input(&pFormatCtx, argv[1], NULL, NULL);
avformat_find_stream_info(pFormatCtx, NULL);
// find the video stream...
// pointer to the codec context...
// open codec...
pFrame = av_frame_alloc();
while (av_read_frame(pFormatCtx, &packet) >= 0) {
    AVStream *strem = pFormatCtx->streams[videoStream];
    if (packet.stream_index == videoStream) {
        avcodec_decode_video2(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished) {
            // best-effort timestamp, in units of the stream time_base
            int64_t perts = av_frame_get_best_effort_timestamp(pFrame);
            if (isMyFrame(pFrame)) {
                // print the timestamp in seconds
                cout << perts * av_q2d(strem->time_base) << "\n";
            }
        }
    }
    // free allocated space (av_free_packet(&packet))
}
//...
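A note on why the numbers differ: remuxing to MPEG-TS rewrites timestamps. The mpegts muxer applies its own initial offset (with ffmpeg's defaults the first PTS is commonly observed around 1.4 s), so absolute PTS values in the stream will not match those in the original file; passing -copyts to the streaming command is often suggested to preserve the source timestamps, though behaviour varies by version. A frame number that is stable across both inputs can instead be derived by normalizing against the stream's start_time, as in this sketch (it assumes avg_frame_rate is populated, which it usually is for both MP4 and MPEG-TS):

#include <math.h>                  // llrint
#include <libavformat/avformat.h>  // AVStream, AVFrame, av_q2d

// Sketch: frame index relative to the start of its stream, so the same
// frame gets the same index whether the input is the file or the TS stream.
static int64_t frame_index(const AVStream *st, const AVFrame *frame)
{
    int64_t ts    = av_frame_get_best_effort_timestamp(frame);
    int64_t start = (st->start_time == AV_NOPTS_VALUE) ? 0 : st->start_time;
    double  sec   = (ts - start) * av_q2d(st->time_base);  // seconds since stream start
    return llrint(sec * av_q2d(st->avg_frame_rate));       // seconds x fps = frame number
}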
-
Live video encoding using...?
27 December 2013, by Basic. I'm attempting to write a fairly simple application that will stream video/audio from a webcam to someone else across the internet (a la Skype, but with more control).
There seems to be very little useful or relevant information on the subject, and what I can find is largely outdated. From my research so far, x264 seems to be the way to go, as it offers an "ultrafast" option designed for this situation. I'm able to turn on the webcam and receive a stream of images, and I can also listen on an audio device and get samples.
Where I'm failing is encoding that information in such a way as to stream it with a minimum of latency (from what I've read, a 200 ms delay is the goal for no obvious lag, including network latency, so let's aim for 100-150 ms).
Things I've tried
ffmpeg
This seems to be the most widely used option for encoding. I've had two real issues with it. Firstly, even using x264 with no look-ahead and the bare minimum of buffering for stability, the delay seems to be on the order of 700 ms using image2pipe. Secondly, it requires ffmpeg to be installed; being able to do this without an external dependency would be nice.
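For what it's worth, most of libx264's latency at default settings comes from look-ahead and B-frames, both of which the zerolatency tune disables; and linking against libavcodec/libx264 directly removes the dependency on an installed ffmpeg executable. A minimal sketch of opening such an encoder through the FFmpeg API (the dimensions and frame rate are placeholders for whatever the webcam delivers; error handling trimmed):

#include <libavcodec/avcodec.h>
#include <libavutil/opt.h>

// Sketch: open an H.264 encoder configured for low-latency live streaming.
static AVCodecContext *open_low_latency_h264(int width, int height, int fps)
{
    const AVCodec *codec = avcodec_find_encoder_by_name("libx264");
    if (!codec)
        return NULL;                           // libx264 not compiled in
    AVCodecContext *enc = avcodec_alloc_context3(codec);
    enc->width     = width;
    enc->height    = height;
    enc->pix_fmt   = AV_PIX_FMT_YUV420P;
    enc->time_base = (AVRational){1, fps};
    av_opt_set(enc->priv_data, "preset", "ultrafast",   0);  // cheapest encode path
    av_opt_set(enc->priv_data, "tune",   "zerolatency", 0);  // no look-ahead, no B-frames
    if (avcodec_open2(enc, codec, NULL) < 0) {
        avcodec_free_context(&enc);
        return NULL;
    }
    return enc;
}

With those two options the encoder emits a packet for every frame pushed in, so the encoder itself contributes roughly one frame of delay; the rest of the latency budget goes to capture, muxing and the network.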
VLC
As with ffmpeg, this requires an external program, which is a negative. Even worse, I can't seem to get latency under 2 seconds, and it seems to increase over time. I've also only been able to get VLC to capture the camera itself rather than take a stream of images, which means I don't get a chance to pre-process them.
DirectShow
I've seen a number of sites recommending the Windows DirectShow encoders, but I haven't been able to find one that works at anything like real time. In fact, the only one I've managed to get going reliably is a Windows Media codec, which has massive latency and fairly large output.
Other considerations
None of the above addresses the problem of adding an audio stream to the video. I'm not sure whether I should encode the two together or send a separate audio stream alongside the video.
In short, I've been Googling for a week or so now and haven't found a decent way to do this. Can someone please point me to a decent example or guide?