
Other articles (54)
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
To configure support for new languages, you need to go to the "Administer" section of the site.
From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
Each newly added language can still be deactivated as long as no object has been created in that language; otherwise it becomes greyed out in the configuration and (...)
-
Contribute to translation
13 April 2011
You can help us improve the language used in the software interface to make MediaSPIP more accessible and user-friendly. You can also translate the interface into any language, allowing it to spread to new linguistic communities.
To do this, we use the translation interface of SPIP, where all the language modules of MediaSPIP are available. Just subscribe to the mailing list and request further information on translation.
MediaSPIP is currently available in French and English (...) -
The farm's regular Cron tasks
1 December 2010
Managing the farm relies on running several repetitive tasks, known as Cron tasks, at regular intervals.
The super Cron (gestion_mutu_super_cron)
This task, scheduled every minute, simply calls the Cron of all the instances of the shared hosting on a regular basis. Combined with a system Cron on the central site of the farm, it generates regular visits to the various sites and prevents the tasks of rarely visited sites from being too (...)
On other sites (8692)
-
FFMPEG error when saving NDI stream to mp4
22 September 2020, by user1163234
I am trying to record an NDI stream to an MP4 file (I want to stream the MP4 to an RTMP endpoint after saving the file). However, I am getting the error below when running this class: https://github.com/WalkerKnapp/devolay/blob/master/examples/src/main/java/com/walker/devolayexamples/recording/RecordingExample.java


Error:


Connecting to source: DESKTOP-GQNH46Q (Ari PC output)
[file @ 0x7fb4f2d48540] Setting default whitelist 'file,crypto'
x265 [info]: HEVC encoder version 0.0
x265 [info]: build info [Mac OS X][clang 8.1.0][64 bit] 8bit+10bit+12bit
x265 [info]: using cpu capabilities: MMX2 SSE2Fast LZCNT SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
x265 [info]: Main profile, Level-3.1 (Main tier)
x265 [info]: Thread pool created using 8 threads
x265 [info]: Slices : 1
x265 [info]: frame threads / pool features : 1 / wpp(12 rows)
x265 [info]: Coding QT: max CU size, min CU size : 64 / 8
x265 [info]: Residual QT: max TU size, max depth : 32 / 1 inter / 1 intra
x265 [info]: ME / range / subpel / merge : hex / 57 / 2 / 3
x265 [info]: Keyframe min / max / scenecut / bias: 1 / 250 / 40 / 5.00
x265 [info]: Lookahead / bframes / badapt : 20 / 4 / 2
x265 [info]: b-pyramid / weightp / weightb : 1 / 1 / 0
x265 [info]: References / ref-limit cu / depth : 3 / off / on
x265 [info]: AQ: mode / str / qg-size / cu-tree : 2 / 1.0 / 32 / 1
x265 [info]: Rate Control / qCompress : CRF-28.0 / 0.60
x265 [info]: tools: rd=3 psy-rd=2.00 early-skip rskip signhide tmvp b-intra
x265 [info]: tools: strong-intra-smoothing lslices=4 deblock sao
[SWR @ 0x7fb4f8893000] Using fltp internally between filters
[mp4 @ 0x7fb4f3820200] Application provided invalid, non monotonically increasing dts to muxer in stream 0: 135 >= 107
Failed to write video flush packet, skipping: Invalid argument
configurationVersion: 1
general_profile_space: 0
general_tier_flag: 0
general_profile_idc: 1
general_profile_compatibility_flags: 0x60000000
general_constraint_indicator_flags: 0x900000000000
general_level_idc: 93
min_spatial_segmentation_idc: 0
parallelismType: 0
chromaFormat: 1
bitDepthLumaMinus8: 0
bitDepthChromaMinus8: 0
avgFrameRate: 0
constantFrameRate: 0
numTemporalLayers: 1
temporalIdNested: 1
lengthSizeMinusOne: 3
numOfArrays: 4
array_completeness[0]: 0
NAL_unit_type[0]: 32
numNalus[0]: 1
nalUnitLength[0][0]: 24
array_completeness[1]: 0
NAL_unit_type[1]: 33
numNalus[1]: 1
nalUnitLength[1][0]: 41
array_completeness[2]: 0
NAL_unit_type[2]: 34
numNalus[2]: 1
nalUnitLength[2][0]: 7
array_completeness[3]: 0
NAL_unit_type[3]: 39
numNalus[3]: 1
nalUnitLength[3][0]: 2050
[AVIOContext @ 0x7fb4f2d48640] Statistics: 2 seeks, 4 writeouts
x265 [info]: frame I: 1, Avg QP:14.03 kb/s: 16.33 
x265 [info]: frame P: 36, Avg QP:21.67 kb/s: 0.04 
x265 [info]: frame B: 73, Avg QP:24.22 kb/s: 0.03 
x265 [info]: Weighted P-Frames: Y:0.0% UV:0.0%
x265 [info]: consecutive B-frames: 28.9% 13.2% 18.4% 2.6% 36.8% 

encoded 110 frames in 15.13s (7.27 fps), 0.18 kb/s, Avg QP:23.29
[aac @ 0x7fb4f6aa6200] Qavg: 65536.000
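The decisive line in this log is the mp4 muxer's complaint about a non-monotonically increasing DTS (135 >= 107), followed by "Failed to write video flush packet". That usually means packets reach the muxer out of timestamp order, or with timestamps still expressed in the encoder's time base. The devolay example drives FFmpeg from Java, but at the C level the usual remedy is a sketch like the one below; enc_ctx, ofmt_ctx and out_stream are placeholder names, not identifiers from the example.

/* Sketch only: drain the encoder and hand packets to the muxer with
 * timestamps rescaled to the stream time base. enc_ctx, ofmt_ctx and
 * out_stream are placeholders, not taken from the original post. */
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>

static int drain_and_mux(AVCodecContext *enc_ctx, AVFormatContext *ofmt_ctx,
                         AVStream *out_stream)
{
    AVPacket *pkt = av_packet_alloc();
    int ret;
    while ((ret = avcodec_receive_packet(enc_ctx, pkt)) >= 0) {
        /* Convert pts/dts/duration from the encoder time base to the
         * stream time base; without this the mp4 muxer can see a dts
         * that goes backwards and rejects the packet. */
        av_packet_rescale_ts(pkt, enc_ctx->time_base, out_stream->time_base);
        pkt->stream_index = out_stream->index;
        /* av_interleaved_write_frame() buffers and reorders packets so the
         * muxer receives a monotonically increasing dts per stream. */
        ret = av_interleaved_write_frame(ofmt_ctx, pkt);
        av_packet_unref(pkt);
        if (ret < 0)
            break;
    }
    av_packet_free(&pkt);
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}

If the recording example writes packets with av_write_frame() instead, switching to av_interleaved_write_frame(), or otherwise ensuring every packet's dts is strictly greater than the previous one on that stream, is the first thing to check.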



-
AV_PIX_FMT_YUVJ422P to jpeg conversion
4 March 2019, by user3743908
I am able to convert an image from AV_PIX_FMT_YUVJ422P to JPEG format (code below), but the resulting image has a green shade over the entire bottom half. Please suggest where I am going wrong.
These are the steps I have taken:
Initially I have an AV_PIX_FMT_UYVY422 image from the camera. I converted it to AV_PIX_FMT_YUVJ422P format and can view this image on http://rawpixels.net/; the parameters shown by the website are size 2448x2050, Bpp1 = 8, Bpp2 = 8, Bpp3 = 8, alignment 1, SubSampling H = 2, SubSampling V = 1, format: YUV422P.
So the input image is in the correct AV_PIX_FMT_YUVJ422P format; I can also view it in "YUV image viewer Software" using the YUV422 format.
Now I am trying to convert it to JPEG format using the code below; attached is the resulting image with a green shade over the entire bottom half.
AVFormatContext* pFormatCtx;
AVOutputFormat* fmt;
AVStream* video_st;
AVCodecContext* pCodecCtx;
AVCodec* pCodec;
uint8_t* picture_buf;
AVFrame* picture;
AVPacket pkt;
int y_size;
int size;
int got_picture=0;
int ret=0;
int main( int argc, char* argv[] )
{
    FILE *in_file = NULL;
    unsigned int in_width = 2448;
    unsigned int in_height = 2050;
    const char* out_file = "encoded_pic.jpg";
    in_file = fopen("c:\\test_Planar.yuv","rb");
    if(in_file == NULL) { printf("\n\tFile Opening error...!!"); exit(1); }
    else printf("\n\tYUV File Open Sucessfully...!!\n\n");
    av_register_all();   // Loads the whole database of available codecs and formats.
    pFormatCtx = avformat_alloc_context();
    fmt = NULL;
    fmt = av_guess_format("mjpeg",NULL,NULL);
    pFormatCtx->oformat = fmt;
    //------Output URL-------------------------
    if (avio_open(&pFormatCtx->pb,out_file, AVIO_FLAG_READ_WRITE) < 0)
    {
        printf("Couldn't open output file.");
        return -1;
    }
    video_st = avformat_new_stream(pFormatCtx, 0);
    if (video_st==NULL) return -1;
    pCodecCtx = video_st->codec;
    pCodecCtx->codec_id = fmt->video_codec;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->pix_fmt = AV_PIX_FMT_YUVJ422P;
    //--------------------------MY SOURCE PIXEL FORMAT--------------
    pCodecCtx->width = in_width;
    pCodecCtx->height = in_height;
    pCodecCtx->time_base.num = 1;
    pCodecCtx->time_base.den = 1;   //25;
    //Output some information
    av_dump_format(pFormatCtx, 0, out_file, 1);
    // Determine if desired video encoder is installed
    pCodec = avcodec_find_encoder(pCodecCtx->codec_id);
    if (!pCodec)
    {
        printf("Codec not found.");
        return -1;
    }
    printf("\nCodec Identified done\n");
    if (avcodec_open2(pCodecCtx, pCodec,NULL) < 0){
        printf("Could not open codec.\n");
        return -1;
    }
    picture = av_frame_alloc();
    size = avpicture_get_size(pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
    picture_buf = (uint8_t *)av_malloc(size);
    if (!picture_buf) return -1;
    avpicture_fill((AVPicture *)picture, picture_buf, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height);
    printf("\t\nWrite Header..");
    avformat_write_header(pFormatCtx,NULL);
    y_size = pCodecCtx->width * pCodecCtx->height;
    av_new_packet(&pkt,y_size*3);
    //Read YUV
    if (fread(picture_buf, 1, y_size*3/2, in_file) <=0)
    {
        printf("Could not read input file.");
        return -1;
    }
    //--------------------------------------------input image format UYVY
    picture->data[0] = picture_buf;              // Y
    picture->data[1] = picture_buf + y_size;     // U
    picture->data[2] = picture_buf + y_size*5/4; // V
    //-----------------------------------------------
    printf("\t\n Encode the image..\n");
    ret = avcodec_encode_video2(pCodecCtx, &pkt, picture, &got_picture);
    if(ret < 0)
    {
        printf("Encode Error.\n");
        return -1;
    }
    if (got_picture==1)
    {
        pkt.stream_index = video_st->index;
        ret = av_write_frame(pFormatCtx, &pkt);
    }
    av_free_packet(&pkt);
    //Write Trailer
    av_write_trailer(pFormatCtx);
    printf("Encode Successful.\n");
    if (video_st)
    {
        avcodec_close(video_st->codec);
        av_free(picture);
        av_free(picture_buf);
    }
    avio_close(pFormatCtx->pb);
    avformat_free_context(pFormatCtx);
    fclose(in_file);
    printf("\n\tYUV File Close Sucessfully...!!");
}
Resulting JPEG image encoded from the YUVJ422P input, showing the green shade on the bottom half.
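A plausible cause of the green band, judging from the plane offsets above: the buffer is read and split using 4:2:0 sizes (y_size*3/2 bytes in total, V plane at y_size*5/4), but YUVJ422P is 4:2:2, where each chroma plane is y_size/2 bytes and a whole frame occupies 2*y_size bytes. A minimal sketch of the 4:2:2 layout, using a hypothetical helper name rather than code from the post:

#include <stdint.h>
#include <stddef.h>

/* Sketch, assuming the input really is planar 4:2:2 (YUVJ422P):
 * each chroma plane is half the luma size, so a frame occupies
 * 2 * y_size bytes, not the 3/2 * y_size used in the code above. */
static size_t fill_yuv422p_planes(uint8_t *buf, int width, int height,
                                  uint8_t *data[3])
{
    size_t y_size = (size_t)width * height;
    size_t c_size = y_size / 2;           /* one chroma plane in 4:2:2 */
    data[0] = buf;                        /* Y */
    data[1] = buf + y_size;               /* U */
    data[2] = buf + y_size + c_size;      /* V */
    return y_size + 2 * c_size;           /* bytes to fread() per frame */
}

With these offsets the fread() should request 2 * y_size bytes; reading only y_size*3/2 leaves the tail of the chroma planes uninitialised, which typically shows up as a colour cast over the lower part of the encoded picture.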
-
Packed (AV_PIX_FMT_UYVY422) to planar (AV_PIX_FMT_YUVJ422P) format conversion
25 August 2017, by user3743908
My image format is "YUV422_8_UYVY", which is the packed AV_PIX_FMT_UYVY422 format. I am trying to convert it to planar AV_PIX_FMT_YUVJ422P but have not been able to succeed yet; below is the code I am working on.
Error message: [swscaler @ 004b3fa0] deprecated pixel format used, make sure you did set range correctly
The resulting image file has a size of 0 KB.
What should the last argument of av_image_alloc() be for this conversion (16, 32, etc.)?
My aim is to convert the packed YUV image to planar YUV format.
static AVCodecContext *pCodecCtx;
static AVFormatContext *pFormatCtx;
static AVCodec *pCodec;
static AVOutputFormat* fmt;
static AVFrame *RawPic;
static AVFrame *ScalePic;
static AVPacket pkt;
static AVStream* video_st;
static FILE *file;
static struct SwsContext *sws_ctx;
enum AVPixelFormat src_pix_fmt = AV_PIX_FMT_UYVY422;
enum AVPixelFormat dst_pix_fmt = AV_PIX_FMT_YUVJ422P;
int main( ) {
    FILE *in_file = NULL;               //packed Source
    FILE *out_file = NULL;              //planar output
    int in_width = 2448;                //YUV's width
    int in_height = 2050;               //YUV's heigh
    int out_width = 2448;               //YUV's width
    int out_height = 2050;              //YUV's heigh
    unsigned long int ret;
    in_file = fopen("c:\\yuv422_8_uyvy.yuv","rb");   //Source Input File
    if(in_file == NULL) { printf("\n\tinput File Opening error...!!"); exit(1); }
    out_file = fopen("d:\\test_Planar.yuv", "wb");   //Source Input File
    if(out_file == NULL) { printf("\n\toutput File Opening error...!!"); exit(1); }
    else { printf("\n\tOutput File Created...!!"); }
    //------Loads the whole database of available codecs and formats------
    av_register_all();
    printf("\t\n\tCodac database Loaded...\n");
    //------Contex Variable assignment--------------------------------
    pFormatCtx = avformat_alloc_context();
    fmt = NULL;
    fmt = av_guess_format("mjpeg",NULL,NULL);
    pFormatCtx->oformat = fmt;
    video_st = avformat_new_stream(pFormatCtx, 0);
    if (video_st==NULL) return -1;
    pCodecCtx = video_st->codec;
    pCodecCtx->codec_id = fmt->video_codec;
    pCodecCtx->codec_type = AVMEDIA_TYPE_VIDEO;
    pCodecCtx->pix_fmt = src_pix_fmt;
    printf("\t\n\tContex Variable assigned...\n");
    //------Allocate Source Image Buffer--------------------------------
    AVFrame *RawPic = av_frame_alloc();
    if(!RawPic) { printf("\nCould not allocate Raw Image frame\n"); exit(1); }
    RawPic->format = pCodecCtx->pix_fmt;
    RawPic->width = in_width;
    RawPic->height = in_height;
    ret = av_image_alloc(RawPic->data, RawPic->linesize, in_width, in_height, src_pix_fmt, 16);
    if(ret < 0) { printf("\nCould not allocate raw picture buffer\n"); exit(1); }
    printf("\n\tAllocate Source Image Buffer");
    //------Allocate Desitnation Image Buffer-------------------
    AVFrame *ScalePic = av_frame_alloc();
    if(!ScalePic) { printf("\nCould not allocate Scale Image frame\n"); exit(1); }
    ScalePic->format = pCodecCtx->pix_fmt;
    ScalePic->width = out_width;
    ScalePic->height = out_height;
    ret = av_image_alloc(ScalePic->data, ScalePic->linesize, out_width, out_height, dst_pix_fmt, 32);
    if(ret < 0) { printf("\nCould not allocate Scale picture buffer\n"); exit(1); }
    dst_bufsize = ret;
    printf("\n\tAllocate Destination Image Buffer");
    //------Create scaling context------------------------------sws_getContex
    printf("\t\n\tCreating Scaling context..[sws_getContext]\n");
    sws_ctx = sws_getContext( in_width, in_height, src_pix_fmt,
                              out_width, out_height, dst_pix_fmt,
                              SWS_BICUBIC, NULL, NULL, NULL);
    if(!sws_ctx) { printf("\nContext Error..\n"); }
    printf("\t\n\tScaling context...Created\n");
    //------Create scaling context---OR CONVERTED TO DESTINATION FORMAT--
    sws_scale(sws_ctx, RawPic->data, RawPic->linesize, 0, in_height, ScalePic->data, ScalePic->linesize);
    printf("\t\n\tCreating Scaling context...sws_scale...done\n");
    int num_bytes = avpicture_get_size(src_pix_fmt, in_width, in_height);
    uint8_t* ScalePic_Buffer = (uint8_t *)av_malloc(num_bytes*sizeof(int8_t));
    avpicture_fill((AVPicture*)ScalePic, ScalePic_Buffer, AV_PIX_FMT_YUVJ422P, out_width, out_height);
    //-----Write Scale Image to outputfile----------------------------
    fwrite(ScalePic->data, 1, dst_bufsize, out_file);
    //---Release all memory and close file----------------------------------
    fclose(in_file);
    fclose(out_file);
    avcodec_close(pCodecCtx);
    av_free(pCodecCtx);
    av_freep(&RawPic->data[0]);
    av_frame_free(&RawPic);
    av_freep(&ScalePic->data[0]);
    av_frame_free(&ScalePic);
    av_frame_free(&RawPic);
    printf("\n\n");
    system("PAUSE");
    exit(1);
}
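For what it is worth, the conversion itself does not need the MJPEG encoder or the format context at all. The sketch below, under the assumption that the input file is raw packed UYVY at 2448x2050, performs only the steps the post is aiming at: allocate source and destination images, read the packed data (the posted code never fills RawPic, so sws_scale converts an empty buffer), convert with sws_scale(), and dump the three planes. The last argument of av_image_alloc() is a byte alignment: 1 gives tightly packed lines, while 16 or 32 pad each line for SIMD. The "deprecated pixel format" warning comes from handing a J (full-range) format to swscale; AV_PIX_FMT_YUV422P has exactly the same byte layout, only the signalled range differs.

/* Minimal sketch, not the original program: convert one packed
 * AV_PIX_FMT_UYVY422 frame from a raw file into planar 4:2:2 and
 * write the Y, U and V planes to another raw file. */
#include <stdio.h>
#include <libavutil/imgutils.h>
#include <libavutil/mem.h>
#include <libswscale/swscale.h>

int main(void)
{
    const int w = 2448, h = 2050;            /* dimensions from the post */
    uint8_t *src_data[4], *dst_data[4];
    int src_linesize[4], dst_linesize[4];

    /* Alignment 1 = tightly packed lines; use 16 or 32 only if later
     * processing expects SIMD-friendly strides. */
    int src_size = av_image_alloc(src_data, src_linesize, w, h,
                                  AV_PIX_FMT_UYVY422, 1);
    int dst_size = av_image_alloc(dst_data, dst_linesize, w, h,
                                  AV_PIX_FMT_YUV422P, 1);
    if (src_size < 0 || dst_size < 0)
        return 1;

    FILE *in = fopen("c:\\yuv422_8_uyvy.yuv", "rb");
    FILE *out = fopen("d:\\test_Planar.yuv", "wb");
    if (!in || !out)
        return 1;

    /* The source buffer must actually be filled before scaling. */
    if (fread(src_data[0], 1, src_size, in) != (size_t)src_size)
        return 1;

    struct SwsContext *sws = sws_getContext(w, h, AV_PIX_FMT_UYVY422,
                                            w, h, AV_PIX_FMT_YUV422P,
                                            SWS_BICUBIC, NULL, NULL, NULL);
    if (!sws)
        return 1;

    sws_scale(sws, (const uint8_t * const *)src_data, src_linesize,
              0, h, dst_data, dst_linesize);

    /* Planar output: Y, then U, then V. In 4:2:2 the chroma planes have
     * the full image height and half the width. */
    fwrite(dst_data[0], 1, (size_t)dst_linesize[0] * h, out);
    fwrite(dst_data[1], 1, (size_t)dst_linesize[1] * h, out);
    fwrite(dst_data[2], 1, (size_t)dst_linesize[2] * h, out);

    sws_freeContext(sws);
    av_freep(&src_data[0]);
    av_freep(&dst_data[0]);
    fclose(in);
    fclose(out);
    return 0;
}

Note also that fwrite(ScalePic->data, ...) in the posted code writes the array of plane pointers rather than the pixel data they point to, and the avpicture_fill() call after sws_scale() replaces the converted planes with a fresh, empty buffer; writing each plane individually, as above, avoids both problems.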