
Media (91)
-
Les Miserables
9 December 2019
Updated: December 2019
Language: French
Type: Text
-
VideoHandle
8 November 2019
Updated: November 2019
Language: French
Type: Video
-
Somos millones 1
21 July 2014
Updated: June 2015
Language: French
Type: Video
-
Un test - mauritanie
3 April 2014
Updated: April 2014
Language: French
Type: Text
-
Pourquoi Obama lit il mes mails ?
4 February 2014
Updated: February 2014
Language: French
-
IMG 0222
6 October 2013
Updated: October 2013
Language: French
Type: Image
Other articles (68)
-
(De)Activating features (plugins)
18 February 2011
To manage the addition and removal of extra features (plugins), MediaSPIP relies on SVP as of version 0.2.
SVP makes it easy to activate plugins from the MediaSPIP configuration area.
To get there, go to the configuration area and open the "Plugin management" page.
By default, MediaSPIP ships with the full set of so-called "compatible" plugins, which have been tested and integrated to work perfectly with each (...)
-
Enabling visitor registration
12 April 2011
It is also possible to enable visitor registration, which lets anyone open an account on the channel in question, for instance in the context of open projects.
To do so, go to the site's configuration area and choose the "User management" submenu. The first form shown corresponds to this feature.
By default, during its initialisation MediaSPIP created a menu entry in the top menu of the page leading (...)
-
MediaSPIP: changing the rights for object creation and final publication
11 November 2010
By default, MediaSPIP lets you create five types of objects.
Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, although webmasters can of course configure them.
These rights are locked down for several reasons: because allowing publication should be the webmaster's decision rather than a platform-wide default, and because having an account can also serve other purposes, (...)
On other sites (9360)
-
How to playback RAW video and audio in VLC?
24 February 2014, by Lane
I have 2 files...
- RAW H264 video
- RAW PCM audio (uncompressed from PCM Mu Law)
...and I am looking to be able to play them in a Java application (using VLCJ possibly). I am able to run the ffmpeg command...
- ffmpeg -i video -i audio -preset ultrafast movie.mp4
...to generate an mp4, but it takes 1/8 of the source length (1 minute to generate a movie from 8 minutes of RAW data). My problem is that this is not fast enough, so I am trying to play back the RAW sources directly (see the sketch at the end of this question). I can play the video with the VLC command...
- vlc video --demux=h264 (if I don't specify this flag, it doesn't work)
...and it plays correctly, but gives me the error...
[0x10028bbe0] main interface error: no suitable interface module
[0x10021d4a0] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
[0x10aa14950] h264 demux error: this doesn't look like a H264 ES stream, continuing anyway
[0x1003ccb50] main input error: Invalid PCR value in ES_OUT_SET_(GROUP_)PCR!
shader program 1: WARNING: Output of vertex shader 'TexCoord1' not read by fragment shader
WARNING: Output of vertex shader 'TexCoord2' not read by fragment shader
...similarly, I can play the RAW audio with the VLC command...
- vlc audio (note that I do not need to specify the --demux flag)
...so, what I am looking for is...
- How to playback the RAW audio and video together using the VLC CLI?
- Recommendations for a Java application solution?
...thanks!
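
A possible shortcut for the slow muxing step (a sketch only; the raw-PCM parameters below are assumptions that must match the actual data): since the H264 stream is already encoded, it can be stream-copied into the mp4 instead of re-encoded, which -preset ultrafast still does.

ffmpeg -f h264 -i video -f s16le -ar 8000 -ac 1 -i audio -c:v copy -c:a aac movie.mp4

Here -f h264 and -f s16le force the raw demuxers, -c:v copy skips the video re-encode entirely, and the PCM is encoded to AAC because mp4 containers have poor support for raw PCM tracks.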
-
Play stream on Nginx at a certain time
6 June 2020, by Edius
I have a VDS with Nginx installed for WebTV. I need the mp4 file to start playing at 8:00 AM and be streamed via RTMP. How should I change my nginx.conf? Right now the file plays from the beginning whenever a user presses the "Play" button, but I need the user to see the current point of the stream, like on TV. My config:



server {
    listen 1935;
    chunk_size 4000;

    play_time_fix off;
    interleave on;
    publish_time_fix on;

    application app {
        live on;
        exec_play ffmpeg -re -stream_loop 2 -i /var/www/html/video/video.mp4 -c copy -f flv rtmp://.../app/stream;
    }
}
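
One way to get the "already in progress" behaviour (a sketch, assuming the nginx-rtmp-module config above and ffmpeg on the server; the localhost URL and schedule are illustrative): rather than exec_play, which launches a publisher per viewer event, publish the file once on a schedule so every viewer joins the same live stream at its current position.

# crontab entry: start publishing the file at 08:00 every day
0 8 * * * ffmpeg -re -stream_loop 2 -i /var/www/html/video/video.mp4 -c copy -f flv rtmp://localhost/app/stream

Because the stream is published once, centrally, pressing "Play" subscribes the user to the ongoing broadcast instead of restarting the file, which is the TV-like behaviour described above.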



-
FFmpeg - Putting segments of the same video together
11 June 2020, by parthlr
I am trying to take different segments of the same video and put them together in a new video, essentially cutting out the parts in between the segments. I have built on the answer to this question that I asked before to try to do this. I figured that when putting segments of the same video together, I would have to subtract the first dts of each segment so that it starts exactly where the previous segment ends.



However, when I attempt to do this, I once again get the error
Application provided invalid, non monotonically increasing dts to muxer in stream 0
for both streams 0 and 1 (video and audio). It seems that I receive this error only for the first packet in each segment.


On top of that, the output file plays the segments in the correct order, but the video freezes for about a second at the transition from one segment to the next. I suspect this is because the dts of each packet is not set properly, so each segment is timestamped about a second later than it should be.
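
For reference, the bookkeeping described above usually amounts to shifting every packet by the first timestamp of its segment plus the total duration already written. A minimal sketch of that arithmetic (the names are illustrative, not the ones from the code below):

#include <libavcodec/avcodec.h>

// Rebase a packet so this segment starts exactly where the previous one ended.
// seg_first_dts/seg_first_pts: timestamps of the first packet read from this segment
// written: total duration (in the stream's time base) already written to the output
static void rebase_packet(AVPacket* pkt, int64_t seg_first_dts, int64_t seg_first_pts, int64_t written) {
    pkt->dts = pkt->dts - seg_first_dts + written;
    pkt->pts = pkt->pts - seg_first_pts + written;
}

After a segment's last packet, written would be advanced by that segment's span (last dts minus first dts plus one packet duration) so the next segment continues monotonically.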



This is the code that I have written:



Video and ClipSequence structs:



typedef struct Video {
    char* filename;
    AVFormatContext* inputContext;
    AVFormatContext* outputContext;
    AVCodec* videoCodec;
    AVCodec* audioCodec;
    AVStream* inputStream;
    AVStream* outputStream;
    AVCodecContext* videoCodecContext_I; // Input
    AVCodecContext* audioCodecContext_I; // Input
    AVCodecContext* videoCodecContext_O; // Output
    AVCodecContext* audioCodecContext_O; // Output
    int videoStream;
    int audioStream;
    SwrContext* swrContext;
} Video;

typedef struct ClipSequence {
    VideoList* videos;
    AVFormatContext* outputContext;
    AVStream* outputStream;
    int64_t v_firstdts, a_firstdts;     // first dts read from the current segment
    int64_t v_lastdts, a_lastdts;       // offset accumulated from previous segments
    int64_t v_currentdts, a_currentdts; // rebased dts of the current packet
} ClipSequence;




Decoding and encoding (same for audio):



int decodeVideoSequence(ClipSequence* sequence, Video* video, AVPacket* packet) {
    int response = avcodec_send_packet(video->videoCodecContext_I, packet);
    if (response < 0) {
        printf("[ERROR] Failed to send video packet to decoder\n");
        return response;
    }
    AVFrame* frame = av_frame_alloc();
    while (response >= 0) {
        response = avcodec_receive_frame(video->videoCodecContext_I, frame);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video frame from decoder\n");
            return response;
        }
        if (response >= 0) {
            // Do stuff and encode

            // Subtract first dts from the current dts
            sequence->v_currentdts = packet->dts - sequence->v_firstdts;

            if (encodeVideoSequence(sequence, video, frame) < 0) {
                printf("[ERROR] Failed to encode new video\n");
                return -1;
            }
        }
        av_frame_unref(frame);
    }
    av_frame_free(&frame); // release the working frame
    return 0;
}

int encodeVideoSequence(ClipSequence* sequence, Video* video, AVFrame* frame) {
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate memory for video output packet\n");
        return -1;
    }
    int response = avcodec_send_frame(video->videoCodecContext_O, frame);
    if (response < 0) {
        printf("[ERROR] Failed to send video frame for encoding\n");
        return response;
    }
    while (response >= 0) {
        response = avcodec_receive_packet(video->videoCodecContext_O, packet);
        if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
            break;
        } else if (response < 0) {
            printf("[ERROR] Failed to receive video packet from encoder\n");
            return response;
        }
        // Update dts and pts of video
        packet->duration = VIDEO_PACKET_DURATION;
        int64_t cts = packet->pts - packet->dts;
        packet->dts = sequence->v_currentdts + sequence->v_lastdts + packet->duration;
        packet->pts = packet->dts + cts;
        packet->stream_index = video->videoStream;
        response = av_interleaved_write_frame(sequence->outputContext, packet);
        if (response < 0) {
            printf("[ERROR] Failed to write video packet\n");
            break;
        }
    }
    av_packet_unref(packet);
    av_packet_free(&packet);
    return 0;
}




Cutting the video from a specific range of frames:



int cutVideo(ClipSequence* sequence, Video* video, int startFrame, int endFrame) {
    printf("[WRITE] Cutting video from frame %i to %i\n", startFrame, endFrame);
    // Seeking stream is set to 0 by default and for testing purposes
    if (findPacket(video->inputContext, startFrame, 0) < 0) {
        printf("[ERROR] Failed to find packet\n");
    }
    AVPacket* packet = av_packet_alloc();
    if (!packet) {
        printf("[ERROR] Could not allocate packet for cutting video\n");
        return -1;
    }
    int currentFrame = startFrame;
    bool v_firstframe = true;
    bool a_firstframe = true;
    while (av_read_frame(video->inputContext, packet) >= 0 && currentFrame <= endFrame) {
        if (packet->stream_index == video->videoStream) {
            // Only count video frames since seeking is based on 60 fps video frames
            currentFrame++;
            // Store the first dts
            if (v_firstframe) {
                v_firstframe = false;
                sequence->v_firstdts = packet->dts;
            }
            if (decodeVideoSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode video\n");
                return -1;
            }
        } else if (packet->stream_index == video->audioStream) {
            if (a_firstframe) {
                a_firstframe = false;
                sequence->a_firstdts = packet->dts;
            }
            if (decodeAudioSequence(sequence, video, packet) < 0) {
                printf("[ERROR] Failed to decode and encode audio\n");
                return -1;
            }
        }
        av_packet_unref(packet);
    }
    sequence->v_lastdts += sequence->v_currentdts;
    sequence->a_lastdts += sequence->a_currentdts;
    return 0;
}




Finding the correct place in the video to start:



int findPacket(AVFormatContext* inputContext, int frameIndex, int stream) {
    int64_t timebase;
    if (stream < 0) {
        timebase = AV_TIME_BASE;
    } else {
        timebase = inputContext->streams[stream]->time_base.den / inputContext->streams[stream]->time_base.num;
    }
    int64_t seekTarget = timebase * frameIndex / VIDEO_DEFAULT_FPS;
    if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_ANY) < 0) {
        printf("[ERROR] Failed to find keyframe from frame index %i\n", frameIndex);
        return -1;
    }
    return 0;
}
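
One caveat on the seek arithmetic above (a sketch; it keeps the same frame-index convention as findPacket): computing the time base as den / num truncates for fractional time bases such as 1001/30000. av_rescale_q performs the multiply-divide at full 64-bit precision instead:

#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

// Convert a frame index at a given fps into a seek target in the stream's time base.
static int64_t frameToTimestamp(AVFormatContext* ctx, int stream, int frameIndex, int fps) {
    AVRational tb = (stream < 0) ? AV_TIME_BASE_Q : ctx->streams[stream]->time_base;
    // frameIndex counts units of 1/fps seconds; rescale them into tb units
    return av_rescale_q(frameIndex, (AVRational){1, fps}, tb);
}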




UPDATE:



I have achieved the desired result, but not in the way I wanted. I took each segment and encoded it to a separate video file, and then encoded those separate videos into one sequence. However, this isn't an optimal way to achieve what I want: it is definitely a lot slower, and I have written much more code than I believe I should have. I still don't know what the issue with my original approach is, and I would greatly appreciate any help.
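
For what it's worth, since this workaround already goes through intermediate files, the concat demuxer could at least remove the second encode (a sketch; the file names are placeholders):

# list.txt
file 'segment1.mp4'
file 'segment2.mp4'

ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4

With -c copy the join is a pure remux, so only the per-segment encodes remain; fixing the single-pass version above would still be the better solution.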