
Media (1)
-
Ogg detection bug
22 March 2013, by
Updated: April 2013
Language: French
Type: Video
Other articles (99)
-
Improving the base version
13 September 2013
A nicer multiple-select field
The Chosen plugin improves the usability of multiple-selection fields. See the two images below for a comparison.
All you need to do is enable the Chosen plugin (Site general configuration > Plugin management), then configure it (Templates > Chosen) by enabling Chosen on the public site and specifying which form elements to enhance, for example select[multiple] for multiple-selection lists (...) -
Emballe médias: what is it for?
4 February 2011, by
This plugin is designed to manage sites that publish documents of all types.
It creates "media" items; that is, a "media" item is an article in the SPIP sense, created automatically when a document is uploaded, whether it is audio, video, image or text; only a single document can be linked to a so-called "media" article; -
Specific configuration for PHP5
4 February 2011, by
PHP5 is required; you can install it by following this dedicated tutorial.
It is recommended to disable safe_mode at first; however, if it is configured correctly and the necessary binaries are accessible, MediaSPIP should work properly with safe_mode enabled.
Specific modules
Certain specific PHP modules must be installed, via your distribution's package manager or manually: php5-mysql for connectivity with the (...)
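As a hedged illustration (the exact command and package availability depend on the distribution; only php5-mysql is named above), installing such a module on a Debian-style system looks like:

# Install the MySQL connectivity module for PHP5 from the distribution's packages.
apt-get install php5-mysql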
Sur d’autres sites (12825)
-
Having trouble compiling ffmpeg code in command terminal
10 September 2019, by m00ncake
I am having a bit of trouble compiling my C++ code in my terminal. I have FFmpeg installed as shown below.
ffmpeg version n4.1 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7 (Ubuntu 7.4.0-1ubuntu1~18.04.1)
configuration: --enable-gpl --enable-version3 --disable-static --enable-shared --enable-small --enable-avisynth --enable-chromaprint --enable-frei0r --enable-gmp --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-librtmp --enable-libshine --enable-libsmbclient --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtesseract --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxvid --enable-libxml2 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-libmysofa --enable-openal --enable-opencl --enable-opengl --enable-libdrm
libavutil 56. 22.100 / 56. 22.100
libavcodec 58. 35.100 / 58. 35.100
libavformat 58. 20.100 / 58. 20.100
libavdevice 58. 5.100 / 58. 5.100
libavfilter 7. 40.101 / 7. 40.101
libswscale 5. 3.100 / 5. 3.100
libswresample 3. 3.100 / 3. 3.100
libpostproc 55. 3.100 / 55. 3.100
Hyper fast Audio and Video encoder
usage: ffmpeg [options] [[infile options] -i infile]... {[outfile options] outfile}...
However, when I compile my C++ code, I run into a problem. I am currently trying to get NTP timestamps from private data in FFmpeg, so I need to include its headers. I have looked into FFmpeg's libavformat folder and it does have rtpdec.h, but when I compile my code on the command line I get the error below (I am trying to include this header for RTSPState and RTSPStream, as well as RTPDemuxContext).
cf.cpp:11:10: fatal error: libavformat/rtsp.h: No such file or directory
#include <libavformat/rtpdec.h>
^~~~~~~~~~~~~~~~~~~~
compilation terminated.
This is my code:
#include <cstdio>
#include <cstdlib> // for exit(), EXIT_SUCCESS and EXIT_FAILURE
#include <iostream>
#include <fstream>
#include <sstream>
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
#include <libavformat/rtpdec.h>
#include <libswscale/swscale.h>
}
int main(int argc, char** argv) {
// Open the initial context variables that are needed
SwsContext *img_convert_ctx;
AVFormatContext* format_ctx = avformat_alloc_context();
AVCodecContext* codec_ctx = NULL;
int video_stream_index;
// RTCP/NTP timing state
uint32_t last_rtcp_ts = 0;
double base_time = 0;
double time = 0;
// Register everything
av_register_all();
avformat_network_init();
//open RTSP
if (avformat_open_input(&format_ctx, "rtsp://admin:password@192.168.1.67:554",
NULL, NULL) != 0) {
return EXIT_FAILURE;
}
if (avformat_find_stream_info(format_ctx, NULL) < 0) {
return EXIT_FAILURE;
}
//search video stream
for (int i = 0; i < format_ctx->nb_streams; i++) {
if (format_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
video_stream_index = i;
}
AVPacket packet;
av_init_packet(&packet);
//open output file
AVFormatContext* output_ctx = avformat_alloc_context();
AVStream* stream = NULL;
int cnt = 0;
//start reading packets from stream and write them to file
av_read_play(format_ctx); //play RTSP
// Get the codec
AVCodec *codec = NULL;
codec = avcodec_find_decoder(AV_CODEC_ID_H264);
if (!codec) {
exit(1);
}
// Add this to allocate the context by codec
codec_ctx = avcodec_alloc_context3(codec);
avcodec_get_context_defaults3(codec_ctx, codec);
avcodec_copy_context(codec_ctx, format_ctx->streams[video_stream_index]->codec);
std::ofstream output_file;
if (avcodec_open2(codec_ctx, codec, NULL) < 0)
exit(1);
img_convert_ctx = sws_getContext(codec_ctx->width, codec_ctx->height,
codec_ctx->pix_fmt, codec_ctx->width, codec_ctx->height, AV_PIX_FMT_RGB24,
SWS_BICUBIC, NULL, NULL, NULL);
int size = avpicture_get_size(AV_PIX_FMT_YUV420P, codec_ctx->width,
codec_ctx->height);
uint8_t* picture_buffer = (uint8_t*) (av_malloc(size));
AVFrame* picture = av_frame_alloc();
AVFrame* picture_rgb = av_frame_alloc();
int size2 = avpicture_get_size(AV_PIX_FMT_RGB24, codec_ctx->width,
codec_ctx->height);
uint8_t* picture_buffer_2 = (uint8_t*) (av_malloc(size2));
avpicture_fill((AVPicture *) picture, picture_buffer, AV_PIX_FMT_YUV420P,
codec_ctx->width, codec_ctx->height);
avpicture_fill((AVPicture *) picture_rgb, picture_buffer_2, AV_PIX_FMT_RGB24,
codec_ctx->width, codec_ctx->height);
while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { //read ~ 1000 frames
RTSPState* rt = (RTSPState*) format_ctx->priv_data;
RTSPStream* rtsp_stream = rt->rtsp_streams[0];
RTPDemuxContext* rtp_demux_context = (RTPDemuxContext*) rtsp_stream->transport_priv;
uint32_t new_rtcp_ts = rtp_demux_context->last_rtcp_timestamp;
uint64_t last_ntp_time = 0;
if (new_rtcp_ts != last_rtcp_ts) {
last_rtcp_ts = new_rtcp_ts;
last_ntp_time = rtp_demux_context->last_rtcp_ntp_time;
// NTP time counts seconds since 1900; subtracting 2208988800 converts to the Unix epoch (1970)
uint32_t seconds = ((last_ntp_time >> 32) & 0xffffffff) - 2208988800;
uint32_t fraction = (last_ntp_time & 0xffffffff);
double useconds = ((double) fraction / 0xffffffff);
base_time = seconds + useconds;
uint32_t d_ts = rtp_demux_context->timestamp - last_rtcp_ts;
time = base_time + d_ts / 90000.0;
std::cout << "Time is: " << time << std::endl;
}
std::cout << "1 Frame: " << cnt << std::endl;
if (packet.stream_index == video_stream_index) { //packet is video
std::cout << "2 Is Video" << std::endl;
if (stream == NULL) { //create stream in file
std::cout << "3 create stream" << std::endl;
stream = avformat_new_stream(output_ctx,
format_ctx->streams[video_stream_index]->codec->codec);
avcodec_copy_context(stream->codec,
format_ctx->streams[video_stream_index]->codec);
stream->sample_aspect_ratio =
format_ctx->streams[video_stream_index]->codec->sample_aspect_ratio;
}
int check = 0;
packet.stream_index = stream->id;
std::cout << "4 decoding" << std::endl;
int result = avcodec_decode_video2(codec_ctx, picture, &check, &packet);
std::cout << "Bytes decoded " << result << " check " << check
<< std::endl;
if (cnt > 100) //cnt < 0)
{
sws_scale(img_convert_ctx, picture->data, picture->linesize, 0,
codec_ctx->height, picture_rgb->data, picture_rgb->linesize);
std::stringstream file_name;
file_name << "test" << cnt << ".ppm";
output_file.open(file_name.str().c_str());
output_file << "P3 " << codec_ctx->width << " " << codec_ctx->height
<< " 255\n";
for (int y = 0; y < codec_ctx->height; y++) {
for (int x = 0; x < codec_ctx->width * 3; x++)
output_file
<< (int) (picture_rgb->data[0]
+ y * picture_rgb->linesize[0])[x] << " ";
}
output_file.close();
}
cnt++;
}
av_free_packet(&packet);
av_init_packet(&packet);
}
av_free(picture);
av_free(picture_rgb);
av_free(picture_buffer);
av_free(picture_buffer_2);
av_read_pause(format_ctx);
avio_close(output_ctx->pb);
avformat_free_context(output_ctx);
return (EXIT_SUCCESS);
}
The command I use to compile my code:
g++ -w cf.cpp -o cf $(pkg-config --cflags --libs libavformat libswscale libavcodec)
I am a bit new to coding in FFmpeg.
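For what it is worth, a hedged note rather than a definitive fix: rtpdec.h and rtsp.h are private libavformat headers, so they are not installed alongside the public headers and pkg-config will never put them on the include path. Assuming the FFmpeg n4.1 source tree that matches the installed build is checked out at ~/ffmpeg (a hypothetical location), pointing the compiler at it lets the include resolve:

# ~/ffmpeg is a placeholder for the root of the matching FFmpeg source checkout.
g++ -w cf.cpp -o cf -I"$HOME/ffmpeg" $(pkg-config --cflags --libs libavformat libswscale libavcodec)

Because these structures are not part of the public ABI, the source checkout should match the installed library version exactly, or the struct layouts may not line up at runtime.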
-
ffmpeg terminal: How can I re-encode a video file using all the parameters copied from another video? [closed]
5 December 2022, by Lever
I have two videos A and B. How can I re-encode B using ALL the parameters of A?
I would like to use both A and B as inputs, with all parameters extracted from A and B then re-encoded with A's parameters, for both video and audio. Is there a simple command to copy ALL the parameters of A?


Thanks!


I know the typical parameters of a video include the resolution, fps and so on. But it seems that I always miss something rather than getting ALL the parameters needed to make A and B the same.


The background is that I need to concatenate about 100 videos, only a few of which have lower quality than the others. I don't want to re-encode all of them. Therefore I'm trying to re-encode the lower-quality ones to match the others, so that I can use the file-level concatenation command and avoid re-encoding everything.
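As far as I know there is no single switch that clones every encoding parameter from another file, but here is a hedged sketch of the usual approach: read A's stream parameters with ffprobe, then re-encode B while explicitly setting the ones concatenation cares about (video codec, resolution, frame rate, pixel format, audio codec, sample rate, channel count). The concrete values below (libx264, 1920x1080, 25 fps, yuv420p, AAC at 48 kHz stereo) are only placeholders for whatever ffprobe reports for A:

# Inspect A's video and audio parameters.
ffprobe -v error -show_entries stream=codec_name,width,height,r_frame_rate,pix_fmt,sample_rate,channels -of default=noprint_wrappers=1 A.mp4

# Re-encode B using the values reported for A (the ones shown here are assumptions).
ffmpeg -i B.mp4 -c:v libx264 -vf "scale=1920:1080,fps=25,format=yuv420p" -c:a aac -ar 48000 -ac 2 B_matched.mp4

Timebase and profile/level mismatches can still trip up file-level concatenation, so it is worth testing the result on a short clip first.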


-
Unable to find a suitable output format for ' -filter_complex' -filter_complex : Invalid argument for FFMPEG Terminal
18 August 2023, by heroZero
This is what I am trying:


ffmpeg -i crop-1.mp4 -i crop-2.mp4 -i crop-2.mp4 -i crop-3.mp4 \ -filter_complex \ "[0:v][1:v]hstack=inputs=2[top] ;\ [2:v][3:v]hstack=inputs=2[bottom] ;[top][bottom]vstack=inputs=2[v]" \ -map "[v]" \ output.mp4
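For context, a hedged reading of that error rather than a confirmed diagnosis: in the pasted command each backslash is followed by a space, so the shell hands ffmpeg the argument " -filter_complex" with a leading space; ffmpeg then treats it as an output file name instead of an option, which is why it cannot find a suitable output format for ' -filter_complex'. Removing the space after each backslash (or putting everything on one line) and dropping the stray "\ " sequences and spaces inside the quoted filtergraph gives a sketch like:

ffmpeg -i crop-1.mp4 -i crop-2.mp4 -i crop-2.mp4 -i crop-3.mp4 \
  -filter_complex "[0:v][1:v]hstack=inputs=2[top];[2:v][3:v]hstack=inputs=2[bottom];[top][bottom]vstack=inputs=2[v]" \
  -map "[v]" output.mp4

Note that crop-2.mp4 appears twice only because the original command lists it twice, and audio is not mapped, so something like -map 0:a would need to be added if audio is wanted.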