
Media (1)
-
La conservation du net art au musée. Les stratégies à l’œuvre
26 May 2011
Updated: July 2013
Language: French
Type: Text
Other articles (72)
-
Submit bugs and patches
13 April 2011. Unfortunately, software is never perfect.
If you think you have found a bug, report it using our ticket system. Please help us fix it by providing the following information:
- the browser you are using, including the exact version
- as precise an explanation of the problem as possible
- if possible, the steps taken that resulted in the problem
- a link to the site / page in question
If you think you have solved the bug, fill in a ticket and attach a corrective patch to it.
You may also (...) -
MediaSPIP 0.1 Beta version
25 April 2011. MediaSPIP 0.1 beta is the first version of MediaSPIP proclaimed as "usable".
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...) -
MediaSPIP v0.2
21 June 2013. MediaSPIP 0.2 is the first stable release of MediaSPIP.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here only contains the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to proceed to other manual (...)
On other sites (7174)
-
WebRTC predictions for 2016
17 February 2016, by silvia. I wrote these predictions in the first week of January and meant to publish them as encouragement to think about where WebRTC still needs some work. I’d like to be able to compare the state of WebRTC in the browser a year from now. Therefore, without further ado, here are my thoughts.
WebRTC Browser support
I’m quite optimistic when it comes to browser support for WebRTC. We saw Edge bring in initial support last year, and Apple is looking to hire engineers to implement WebRTC. My prediction is that we will see the following developments in 2016:
- Edge will become interoperable with Chrome and Firefox, i.e. it will publish VP8/VP9 and H.264/H.265 support
- Firefox of course continues to support both VP8/VP9 and H.264/H.265
- Chrome will follow the spec and implement H.264/H.265 support (to add to their already existing VP8/VP9 support)
- Safari will enter the WebRTC space but only with H.264/H.265 support
Codec Observations
With Edge and Safari entering the WebRTC space, there will be a larger focus on H.264/H.265. It will help with creating interoperability between the browsers.
However, since there are so many flavours of H.264/H.265, I expect that when different browsers are used at different endpoints, we will get poor quality video calls because of having to negotiate a common denominator. Certainly, baseline will work interoperably, but better encoding quality and lower bandwidth will only be achieved if all endpoints use the same browser.
Thus, we will get to the funny situation where we buy ourselves interoperability at the cost of video quality and bandwidth. I’d call that a “degree of interoperability” and not the best possible outcome.
I’m going to go out on a limb and say that at this stage, Google is going to strongly consider improving the case for VP8/VP9 by improving its bandwidth adaptability: I think they will buy themselves some SVC capability and make VP9 the best quality codec for live video conferencing. Thus, when Safari eventually follows the standard and also implements VP8/VP9 support, the interoperability win of H.264/H.265 will be only temporary, overshadowed by a vastly better video quality when using VP9.
The Enterprise Boundary
Like all video conferencing technology, WebRTC is having a hard time dealing with the corporate boundary: firewalls and proxies get in the way of setting up video connections from within an enterprise to people outside.
The telco world has come up with the concept of SBCs (session border controllers). SBCs come packed with functionality to deal with security, signalling protocol translation, Quality of Service policing, regulatory requirements, statistics, billing, and even media services like transcoding.
SBCs are total overkill for a world where a large number of Web applications simply want to add a WebRTC feature – probably mostly to provide a video or audio customer support service, but it could be a live training session with call-in, or an interest group conference call.
We cannot install a custom SBC solution for every WebRTC service provider in every enterprise. That’s like saying we need a custom Web proxy for every Web server. It doesn’t scale.
Cloud services thrive on their ability to sell directly to an individual in an organisation on their credit card without that individual having to ask their IT department to put special rules in place. WebRTC will not make progress in the corporate environment unless this is fixed.
We need a solution that allows all WebRTC services to get through an enterprise firewall and enterprise proxy. I think the WebRTC standards have done pretty well with firewalls, and connecting to a TURN server on port 443 will do the trick most of the time. But enterprise proxies are the next frontier.
What it takes is some kind of media packet forwarding service that sits on the firewall or in a proxy and allows WebRTC media packets through – maybe with some configuration that is necessary in the browsers or the Web app to add this service as another type of TURN server.
I don’t have a full understanding of the problems involved, but I think such a solution is vital before WebRTC can go mainstream. I expect that this year we will see some clever people coming up with a solution for this and a new type of product will be born and rolled out to enterprises around the world.
Summary
So these are my predictions. In summary, they address the key areas where I think WebRTC still has to make progress : interoperability between browsers, video quality at low bitrates, and the enterprise boundary. I’m really curious to see where we stand with these a year from now.
—
It’s worth mentioning Philipp Hancke’s tweet reply to my post :
https://datatracker.ietf.org/doc/draft-ietf-rtcweb-return/ … — we saw some clever people come up with a solution already. Now it needs to be implemented
-
Add silent track to WebM video
18 March 2020, by Chognificent. FFmpeg documentation says this can be achieved by doing something similar to the following:
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -i $media -shortest -c:v copy -c:a aac $output
However, when I try this with a WebM video I get the following error:
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
Error initializing output stream 0:1
Is there a workaround for this? Thanks.
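One plausible workaround, assuming the error comes from the container rather than the filter: the WebM container only accepts Vorbis or Opus audio, so -c:a aac cannot be muxed into a WebM file. Encoding the silent track with libopus instead should keep the output a valid WebM:
ffmpeg -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 -i $media -shortest -c:v copy -c:a libopus $output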
-
FFmpeg - Putting segments of same video together
11 June 2020, by parthlr. I am trying to take different segments of the same video and put them together in a new video, essentially cutting out the parts in between the segments. I have built on the answer to this question that I asked before to try and do this. I figured that, when putting together segments of the same video, I would have to subtract the first dts of each segment in order for it to start perfectly after the previous segment.

However, when I attempt to do this, I once again get the error:
Application provided invalid, non monotonically increasing dts to muxer in stream 0
This error occurs for both streams 0 and 1 (video and audio). It seems that I receive this error only for the first packet in each segment.


On top of that, the output file plays the segments in the correct order, but the video freezes for about a second at the transition from one segment to the next. I have a feeling that this is because the dts of each packet is not set properly, and as a result each segment is encoded about a second after it should be.
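To make the intent concrete, here is a minimal sketch of the rebasing arithmetic I am aiming for (the helper and variable names are hypothetical, not part of my code or of FFmpeg):

#include <libavcodec/avcodec.h>

// Shift a packet so that the first dts of its segment lands exactly where
// the previous segment ended (all values in the stream's time_base).
static void rebase_packet(AVPacket* packet, int64_t seg_first_dts, int64_t offset) {
    packet->dts = packet->dts - seg_first_dts + offset;
    packet->pts = packet->pts - seg_first_dts + offset;
}
// After finishing a segment, the offset grows by that segment's shifted
// length, e.g.: offset = last_written_dts + last_packet_duration;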



This is the code that I have written:

Video and ClipSequence structs:

typedef struct Video {
    char* filename;
    AVFormatContext* inputContext;
    AVFormatContext* outputContext;
    AVCodec* videoCodec;
    AVCodec* audioCodec;
    AVStream* inputStream;
    AVStream* outputStream;
    AVCodecContext* videoCodecContext_I; // Input
    AVCodecContext* audioCodecContext_I; // Input
    AVCodecContext* videoCodecContext_O; // Output
    AVCodecContext* audioCodecContext_O; // Output
    int videoStream;
    int audioStream;
    SwrContext* swrContext;
} Video;

typedef struct ClipSequence {
    VideoList* videos;
    AVFormatContext* outputContext;
    AVStream* outputStream;
    int64_t v_firstdts, a_firstdts;     // first dts of the current segment
    int64_t v_lastdts, a_lastdts;       // accumulated dts offset from previous segments
    int64_t v_currentdts, a_currentdts; // current dts relative to the segment start
} ClipSequence;

Decoding and encoding (same for audio):

int decodeVideoSequence(ClipSequence* sequence, Video* video, AVPacket* packet) {
    int response = avcodec_send_packet(video->videoCodecContext_I, packet);
    if (response < 0) {
        printf("[ERROR] Failed to send video packet to decoder\n");
        return response;
    }
    AVFrame* frame = av_frame_alloc();
    while (response >= 0) {
        response = avcodec_receive_frame(video->videoCodecContext_I, frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video frame from decoder\n");
            av_frame_free(&frame);
            return response;
        }
        // Subtract the first dts of the segment from the current packet's dts
        sequence->v_currentdts = packet->dts - sequence->v_firstdts;
        if (encodeVideoSequence(sequence, video, frame) < 0) {
            printf("[ERROR] Failed to encode new video\n");
            av_frame_free(&frame);
            return -1;
        }
        av_frame_unref(frame);
    }
    av_frame_free(&frame);
    return 0;
}

int encodeVideoSequence(ClipSequence* sequence, Video* video, AVFrame* frame) {
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate memory for video output packet\n");
        return -1;
    }
    int response = avcodec_send_frame(video->videoCodecContext_O, frame);
    if (response < 0) {
        printf("[ERROR] Failed to send video frame for encoding\n");
        av_packet_free(&packet);
        return response;
    }
    while (response >= 0) {
        response = avcodec_receive_packet(video->videoCodecContext_O, packet);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video packet from encoder\n");
            av_packet_free(&packet);
            return response;
        }
        // Update dts and pts of video
        packet->duration = VIDEO_PACKET_DURATION;
        int64_t cts = packet->pts - packet->dts;
        packet->dts = sequence->v_currentdts + sequence->v_lastdts + packet->duration;
        packet->pts = packet->dts + cts;
        packet->stream_index = video->videoStream;
        response = av_interleaved_write_frame(sequence->outputContext, packet);
        if (response < 0) {
            printf("[ERROR] Failed to write video packet\n");
            break;
        }
    }
    av_packet_unref(packet);
    av_packet_free(&packet);
    return 0;
}

Cutting the video from a specific range of frames:

int cutVideo(ClipSequence* sequence, Video* video, int startFrame, int endFrame) {
    printf("[WRITE] Cutting video from frame %i to %i\n", startFrame, endFrame);
    // Seeking stream is set to 0 by default and for testing purposes
    if (findPacket(video->inputContext, startFrame, 0) < 0) {
        printf("[ERROR] Failed to find packet\n");
    }
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate packet for cutting video\n");
        return -1;
    }
    int currentFrame = startFrame;
    bool v_firstframe = true;
    bool a_firstframe = true;
    while (av_read_frame(video->inputContext, packet) >= 0 && currentFrame <= endFrame) {
        if (packet->stream_index == video->videoStream) {
            // Only count video frames since seeking is based on 60 fps video frames
            currentFrame++;
            // Store the first dts
            if (v_firstframe) {
                v_firstframe = false;
                sequence->v_firstdts = packet->dts;
            }
            if (decodeVideoSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode video\n");
                return -1;
            }
        } else if (packet->stream_index == video->audioStream) {
            if (a_firstframe) {
                a_firstframe = false;
                sequence->a_firstdts = packet->dts;
            }
            if (decodeAudioSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode audio\n");
                return -1;
            }
        }
        av_packet_unref(packet);
    }
    sequence->v_lastdts += sequence->v_currentdts;
    sequence->a_lastdts += sequence->a_currentdts;
    return 0;
}

Finding the correct place in the video to start:

int findPacket(AVFormatContext* inputContext, int frameIndex, int stream) {
    int64_t timebase;
    if (stream < 0) {
        timebase = AV_TIME_BASE;
    } else {
        timebase = inputContext->streams[stream]->time_base.den / inputContext->streams[stream]->time_base.num;
    }
    int64_t seekTarget = timebase * frameIndex / VIDEO_DEFAULT_FPS;
    if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_ANY) < 0) {
        printf("[ERROR] Failed to find keyframe from frame index %i\n", frameIndex);
        return -1;
    }
    return 0;
}
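For completeness, this is roughly how the functions above are driven (initSequence and freeSequence are hypothetical placeholders for my actual setup and teardown code, which is not shown here):

// Hypothetical driver: stitch two frame ranges of the same input into one output.
ClipSequence* sequence = initSequence("input.mp4", "output.mp4");
Video* video = sequence->videos->video; // first (and only) video in the list
cutVideo(sequence, video, 0, 300);      // first segment: frames 0-300
cutVideo(sequence, video, 600, 900);    // second segment: frames 600-900
av_write_trailer(sequence->outputContext);
freeSequence(sequence);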




UPDATE:

I have achieved the desired result, but not in the way that I wanted to. I took each segment and encoded it to a separate video file. Then I took those separate videos and encoded them into one sequence. However, this isn't the optimal method of achieving what I want: it's definitely a lot slower, and I have written a lot more code than I believe I should have. I still don't know what the issue with my original approach is, and I would greatly appreciate any help.