Media (1)
- Bee video in portrait
14 May 2011, by
Updated: February 2012
Language: French
Type: Video
Other articles (69)
- The farm's regular Cron tasks
1 December 2010, by
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of every instance of the shared-hosting farm on a regular basis. Combined with a system Cron on the farm's central site, this generates regular visits to the various sites and keeps the tasks of rarely visited sites from being too (...)
- Mediabox: opening images in the maximum space available to the user
8 February 2011, by
Image display is constrained by the width allowed by the site's design (which depends on the theme in use), so images are shown at a reduced size. To take advantage of all the space available on the user's screen, a feature can be added that displays the image in a multimedia box overlaid on the rest of the content.
To do this, the "Mediabox" plugin must be installed.
Configuring the multimedia box
As soon as (...)
- Sites built with MediaSPIP
2 May 2011, by
This page presents some of the sites running MediaSPIP.
You can of course add your own via the form at the bottom of the page.
On other sites (8450)
- Modifying FFmpeg and OpenCV source code to capture the RTP timestamp for each packet in NTP format
22 August 2019, by Fr0sty
I was trying a little experiment to get the timestamps of RTP packets through OpenCV's VideoCapture class in Python, which also required modifying FFmpeg to accommodate the changes in OpenCV.
Having read about the RTP packet format, I wanted to fiddle around and see if I could find a way to get the NTP timestamps. I couldn't find any reliable help on retrieving RTP timestamps, so I tried out this little hack.
Credit to ryantheseer on GitHub for the modified code.
FFmpeg version: 3.2.3
OpenCV version: 3.2.0

In the OpenCV source code:
modules/videoio/include/opencv2/videoio.hpp:
Added two getters for the RTP timestamp:
.....
/** @brief Gets the upper 32 bits of the RTP time stamp in NTP format (seconds).
*/
CV_WRAP virtual int64 getRTPTimeStampSeconds() const;
/** @brief Gets the lower 32 bits of the RTP time stamp in NTP format (fraction of a second).
*/
CV_WRAP virtual int64 getRTPTimeStampFraction() const;
.....

modules/videoio/src/cap.cpp:
Added an include and the implementation of the timestamp getter:
....
#include <cstdint>
....
....
static inline uint64_t icvGetRTPTimeStamp(const CvCapture* capture)
{
    return capture ? capture->getRTPTimeStamp() : 0;
}
...
Added the C++ timestamp getters in the VideoCapture class:
....
/** @brief Gets the upper 32 bits of the RTP time stamp in NTP format (seconds).
*/
int64 VideoCapture::getRTPTimeStampSeconds() const
{
    int64 seconds = 0;
    uint64_t timestamp = 0;
    // Get the time stamp from the capture object
    if (!icap.empty())
        timestamp = icap->getRTPTimeStamp();
    else
        timestamp = icvGetRTPTimeStamp(cap);
    // Take the top 32 bits of the time stamp
    seconds = (int64)((timestamp & 0xFFFFFFFF00000000) / 0x100000000);
    return seconds;
}

/** @brief Gets the lower 32 bits of the RTP time stamp in NTP format (fraction of a second).
*/
int64 VideoCapture::getRTPTimeStampFraction() const
{
    int64 fraction = 0;
    uint64_t timestamp = 0;
    // Get the time stamp from the capture object
    if (!icap.empty())
        timestamp = icap->getRTPTimeStamp();
    else
        timestamp = icvGetRTPTimeStamp(cap);
    // Take the bottom 32 bits of the time stamp
    fraction = (int64)((timestamp & 0xFFFFFFFF));
    return fraction;
}
...
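For reference, the two halves recombine into the full 64-bit NTP value as seconds * 2^32 + fraction. A minimal sketch (recombineNTP is an illustrative name of mine, not part of the patch):

// Sketch only: rebuild the 64-bit NTP timestamp from the two getters
// (upper 32 bits = seconds since 1900, lower 32 bits = fraction of a second).
static inline uint64_t recombineNTP(int64 seconds, int64 fraction)
{
    return ((uint64_t)seconds << 32) | ((uint64_t)fraction & 0xFFFFFFFF);
}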
modules/videoio/src/cap_ffmpeg.cpp:
Added an include:
...
#include <cstdint>
...
Added a method reference definition:
...
static CvGetRTPTimeStamp_Plugin icvGetRTPTimeStamp_FFMPEG_p = 0;
...

Added the method to the module initializer:
...
if( icvFFOpenCV )
...
...
icvGetRTPTimeStamp_FFMPEG_p =
    (CvGetRTPTimeStamp_Plugin)GetProcAddress(icvFFOpenCV, "cvGetRTPTimeStamp_FFMPEG");
...
...
icvWriteFrame_FFMPEG_p != 0 &&
icvGetRTPTimeStamp_FFMPEG_p != 0)
...
icvGetRTPTimeStamp_FFMPEG_p = (CvGetRTPTimeStamp_Plugin)cvGetRTPTimeStamp_FFMPEG;

Implemented the getter interface:
...
virtual uint64_t getRTPTimeStamp() const
{
    return ffmpegCapture ? icvGetRTPTimeStamp_FFMPEG_p(ffmpegCapture) : 0;
}
...

In FFmpeg's source code:
libavcodec/avcodec.h:
Added the NTP timestamp definition to the AVPacket struct:
typedef struct AVPacket {
...
...
    uint64_t rtp_ntp_time_stamp;
} AVPacket;

libavformat/rtpdec.c:
Store the NTP time stamp in the struct in the finalize_packet method. The idea is that an RTCP sender report pairs an NTP wall-clock time with an RTP timestamp (s->last_rtcp_ntp_time and s->last_rtcp_timestamp below), and their difference anchors subsequent RTP timestamps to the NTP timeline:
static void finalize_packet(RTPDemuxContext *s, AVPacket *pkt, uint32_t timestamp)
{
    uint64_t offsetTime = 0;
    uint64_t rtp_ntp_time_stamp = timestamp;
    ...
    ...
    /* RM: set the RTP time stamp in the AVPacket */
    if (!s->last_rtcp_ntp_time || !s->last_rtcp_timestamp)
        offsetTime = 0;
    else
        offsetTime = s->last_rtcp_ntp_time - ((uint64_t)(s->last_rtcp_timestamp) * 65536);
    rtp_ntp_time_stamp = ((uint64_t)(timestamp) * 65536) + offsetTime;
    pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp;

libavformat/utils.c:
Copy the NTP time stamp from the demuxed packet to the returned packet in the read_frame_internal method:
static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
{
    ...
    uint64_t rtp_ntp_time_stamp = 0;
    ...
    while (!got_packet && !s->internal->parse_queue) {
        ...
        // COPY OVER the RTP time stamp  TODO: just create a local copy
        rtp_ntp_time_stamp = cur_pkt.rtp_ntp_time_stamp;
        ...
#if FF_API_LAVF_AVCTX
    update_stream_avctx(s);
#endif
    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG,
               "read_frame_internal stream=%d, pts=%s, dts=%s, "
               "size=%d, duration=%"PRId64", flags=%d\n",
               pkt->stream_index,
               av_ts2str(pkt->pts),
               av_ts2str(pkt->dts),
               pkt->size, pkt->duration, pkt->flags);
    pkt->rtp_ntp_time_stamp = rtp_ntp_time_stamp; // Just added this line in the if statement.
    return ret;

My Python code to utilise these changes:
import cv2

uri = 'rtsp://admin:password@192.168.1.67:554'
cap = cv2.VideoCapture(uri)
while True:
    frame_exists, curr_frame = cap.read()
    # if frame_exists:
    k = cap.getRTPTimeStampSeconds()
    l = cap.getRTPTimeStampFraction()
    # getRTPTimeStampSeconds() divided the 64-bit timestamp by 0x100000000,
    # so shift the seconds back up by the same factor and add the fraction
    time_shift = 0x100000000
    m = (time_shift * k) + l
    print("Imagetimestamp: %i" % m)
cap.release()

What I am getting as my output:
Imagetimestamp: 0
Imagetimestamp: 212041451700224
Imagetimestamp: 212041687629824
Imagetimestamp: 212041923559424
Imagetimestamp: 212042159489024
Imagetimestamp: 212042395418624
Imagetimestamp: 212042631348224
...

What astounded me the most was that when I powered the IP camera off and back on, the timestamp would start from 0 and then quickly increment. I read that the NTP time format is relative to January 1, 1900 00:00. Even when I calculated the offset between now and 1900-01-01 and accounted for it, I still ended up with an absurdly high number for the date.
I don't know if I calculated it wrong; I have a feeling it's far off, or that what I am getting is not a timestamp at all.
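For reference, converting a 64-bit NTP timestamp to Unix time means subtracting the 2,208,988,800 seconds between 1900-01-01 and 1970-01-01 from the upper 32 bits. A minimal C sketch (my own, for illustration; not part of the patch):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define NTP_UNIX_OFFSET 2208988800ULL /* seconds from 1900-01-01 to 1970-01-01 */

/* Sketch: print a 64-bit NTP timestamp as Unix wall-clock time. */
static void print_ntp(uint64_t ntp)
{
    uint64_t secs = ntp >> 32;                          /* seconds since 1900 */
    double frac = (double)(uint32_t)ntp / 4294967296.0; /* lower 32 bits */
    if (secs < NTP_UNIX_OFFSET) {
        printf("not a wall-clock NTP time: %llu s since stream start?\n",
               (unsigned long long)secs);
        return;
    }
    time_t unix_secs = (time_t)(secs - NTP_UNIX_OFFSET);
    printf("%.24s + %.3f s\n", ctime(&unix_secs), frac);
}

Applied to the output above, the upper 32 bits of 212041451700224 come to about 49370 seconds, i.e. under 14 hours, nowhere near a 1900-based date. Together with the counter restarting at 0 after a power cycle, that would be consistent with the camera never sending an RTCP sender report (so last_rtcp_ntp_time stays 0, and offsetTime with it), meaning the value is just the camera's own RTP clock scaled by the code's 65536 factor, not NTP wall-clock time.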
- FFmpeg - Putting segments of same video together
11 June 2020, by parthlr
I am trying to take different segments of the same video and put them together in a new video, essentially cutting out the parts in between the segments. I have built on the answer to this question that I asked before to try to do this. I figured that when putting together segments of the same video, I would have to subtract each segment's first dts so that the segment starts exactly where the previous one ends.



However, when I attempt to do this, I once again get the error:
Application provided invalid, non monotonically increasing dts to muxer in stream 0. I get this error for both streams 0 and 1 (video and audio), and it seems to occur only for the first packet of each segment.


On top of that, the output file plays the segments in the correct order, but the video freezes for about a second at each transition from one segment to the next. I have a feeling that this is because the dts of each packet is not set properly, and as a result each segment is placed about a second later than it should be.



This is the code that I have written :



Video and ClipSequence structs:



typedef struct Video {
 char* filename;
 AVFormatContext* inputContext;
 AVFormatContext* outputContext;
 AVCodec* videoCodec;
 AVCodec* audioCodec;
 AVStream* inputStream;
 AVStream* outputStream;
 AVCodecContext* videoCodecContext_I; // Input
 AVCodecContext* audioCodecContext_I; // Input
 AVCodecContext* videoCodecContext_O; // Output
 AVCodecContext* audioCodecContext_O; // Output
 int videoStream;
 int audioStream;
 SwrContext* swrContext;
} Video;

typedef struct ClipSequence {
 VideoList* videos;
 AVFormatContext* outputContext;
 AVStream* outputStream;
 // Per-stream dts bookkeeping used by the cutting code below
 int64_t v_firstdts, v_lastdts, v_currentdts;
 int64_t a_firstdts, a_lastdts, a_currentdts;
} ClipSequence;



Decoding and encoding (same for audio):



int decodeVideoSequence(ClipSequence* sequence, Video* video, AVPacket* packet) {
 int response = avcodec_send_packet(video->videoCodecContext_I, packet);
 if (response < 0) {
 printf("[ERROR] Failed to send video packet to decoder\n");
 return response;
 }
 AVFrame* frame = av_frame_alloc();
 while (response >= 0) {
 response = avcodec_receive_frame(video->videoCodecContext_I, frame);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 break;
 } else if (response < 0) {
 printf("[ERROR] Failed to receive video frame from decoder\n");
 return response;
 }
 if (response >= 0) {
 // Do stuff and encode

 // Subtract first dts from the current dts
 sequence->v_currentdts = packet->dts - sequence->v_firstdts;

 if (encodeVideoSequence(sequence, video, frame) < 0) {
 printf("[ERROR] Failed to encode new video\n");
 return -1;
 }
 }
 av_frame_unref(frame);
 }
 av_frame_free(&frame); // release the decode buffer
 return 0;
}

int encodeVideoSequence(ClipSequence* sequence, Video* video, AVFrame* frame) {
 AVPacket* packet = av_packet_alloc();
 if (!packet) {
 printf("[ERROR] Could not allocate memory for video output packet\n");
 return -1;
 }
 int response = avcodec_send_frame(video->videoCodecContext_O, frame);
 if (response < 0) {
 printf("[ERROR] Failed to send video frame for encoding\n");
 return response;
 }
 while (response >= 0) {
 response = avcodec_receive_packet(video->videoCodecContext_O, packet);
 if (response == AVERROR(EAGAIN) || response == AVERROR_EOF) {
 break;
 } else if (response < 0) {
 printf("[ERROR] Failed to receive video packet from encoder\n");
 return response;
 }
 // Update dts and pts of video
 packet->duration = VIDEO_PACKET_DURATION;
 int64_t cts = packet->pts - packet->dts;
 packet->dts = sequence->v_currentdts + sequence->v_lastdts + packet->duration;
 packet->pts = packet->dts + cts;
 packet->stream_index = video->videoStream;
 response = av_interleaved_write_frame(sequence->outputContext, packet);
 if (response < 0) {
 printf("[ERROR] Failed to write video packet\n");
 break;
 }
 }
 av_packet_unref(packet);
 av_packet_free(&packet);
 return 0;
}
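One thing I would double-check here (an aside of mine, not from the code above): packets returned by avcodec_receive_packet are timed in the encoder's time base and normally need rescaling to the muxer stream's time base before writing, e.g.:

// Sketch: rescale packet timing from the encoder time base to the
// output stream time base before av_interleaved_write_frame().
av_packet_rescale_ts(packet,
                     video->videoCodecContext_O->time_base,
                     sequence->outputStream->time_base);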



Cutting the video from a specific range of frames:



int cutVideo(ClipSequence* sequence, Video* video, int startFrame, int endFrame) {
 printf("[WRITE] Cutting video from frame %i to %i\n", startFrame, endFrame);
 // Seeking stream is set to 0 by default and for testing purposes
 if (findPacket(video->inputContext, startFrame, 0) < 0) {
 printf("[ERROR] Failed to find packet\n");
 }
 AVPacket* packet = av_packet_alloc();
 if (!packet) {
 printf("[ERROR] Could not allocate packet for cutting video\n");
 return -1;
 }
 int currentFrame = startFrame;
 bool v_firstframe = true;
 bool a_firstframe = true;
 while (av_read_frame(video->inputContext, packet) >= 0 && currentFrame <= endFrame) {
 if (packet->stream_index == video->videoStream) {
 // Only count video frames since seeking is based on 60 fps video frames
 currentFrame++;
 // Store the first dts
 if (v_firstframe) {
 v_firstframe = false;
 sequence->v_firstdts = packet->dts;
 }
 if (decodeVideoSequence(sequence, video, packet) < 0) {
 printf("[ERROR] Failed to decode and encode video\n");
 return -1;
 }
 } else if (packet->stream_index == video->audioStream) {
 if (a_firstframe) {
 a_firstframe = false;
 sequence->a_firstdts = packet->dts;
 }
 if (decodeAudioSequence(sequence, video, packet) < 0) {
 printf("[ERROR] Failed to decode and encode audio\n");
 return -1;
 }
 }
 av_packet_unref(packet);
 }
 sequence->v_lastdts += sequence->v_currentdts;
 sequence->a_lastdts += sequence->a_currentdts;
 av_packet_free(&packet); // release the read buffer
 return 0;
}
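If the muxer still rejects the first packet of a segment, one blunt guard (a sketch of mine; it assumes a last_mux_dts field added to ClipSequence and tracked per output stream) is to clamp dts to be strictly increasing just before writing:

// Sketch: force strictly increasing dts before av_interleaved_write_frame().
// sequence->last_mux_dts is a hypothetical field, initialized to AV_NOPTS_VALUE.
if (sequence->last_mux_dts != AV_NOPTS_VALUE && packet->dts <= sequence->last_mux_dts) {
    int64_t shift = sequence->last_mux_dts + 1 - packet->dts;
    packet->dts += shift;
    packet->pts += shift; // keep pts >= dts
}
sequence->last_mux_dts = packet->dts;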



Finding the correct place in the video to start:



int findPacket(AVFormatContext* inputContext, int frameIndex, int stream) {
 int64_t timebase;
 if (stream < 0) {
 timebase = AV_TIME_BASE;
 } else if (stream >= 0) {
 timebase = (inputContext->streams[stream]->time_base.den) / inputContext->streams[stream]->time_base.num;
 }
 int64_t seekTarget = timebase * frameIndex / VIDEO_DEFAULT_FPS;
 if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_ANY) < 0) {
 printf("[ERROR] Failed to find keyframe from frame index %i\n", frameIndex);
 return -1;
 }
 return 0;
}
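As an aside (my own suggestion, not part of the code above): AVSEEK_FLAG_ANY can land on a non-keyframe, which decodes as garbage until the next keyframe arrives. The usual pattern is to seek backward to a keyframe and then decode forward, discarding frames until startFrame is reached:

// Sketch: seek to the nearest keyframe at or before the target,
// then decode and drop frames until the requested frame index.
if (av_seek_frame(inputContext, stream, seekTarget, AVSEEK_FLAG_BACKWARD) < 0) {
    printf("[ERROR] Failed to seek to frame index %i\n", frameIndex);
    return -1;
}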



UPDATE:



I have achieved the desired result, but not in the way that I wanted. I took each segment and encoded it to a separate video file, and then encoded those separate videos into one sequence. However, this isn't an optimal way to achieve what I want: it's definitely a lot slower, and I wrote a lot more code than I believe I should have. I still don't know what the issue is with my original approach, and I would greatly appreciate any help.
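For what it's worth, the second encode in that detour can be skipped: FFmpeg's concat demuxer can join files that share codec parameters using stream copy. A minimal sketch (file names are placeholders):

# segments.txt
file 'seg0.mp4'
file 'seg1.mp4'

ffmpeg -f concat -safe 0 -i segments.txt -c copy joined.mp4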


- Does anyone know any filters for better low quality video?
7 September 2022, by kasten
So maybe my question can be closed, but anyway: I'm researching and looking for a tool that can do the following with video files.


Here's an example of what I want:


When you play a low quality video on your TV and look at its reflection in a mirror, the image appears sharper, as if the mirror were acting as a filter that improves the video.


I don't know if anyone has thought about this effect or whether there is software that does something similar. I know a low quality video can't actually gain detail, but why does it look better in the mirror?
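To illustrate the kind of filter I have in mind: perceptually, the mirror trick resembles edge sharpening, which can be approximated with FFmpeg's unsharp filter (the values below are arbitrary starting points, not a recommendation):

ffmpeg -i input.mp4 -vf "unsharp=5:5:1.0:5:5:0.0" output.mp4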


I'd appreciate it if anyone can comment, as I'm not a video professional.