
Media (1)
-
Richard Stallman et le logiciel libre
19 October 2011
Updated: May 2013
Language: French
Type: Text
Other articles (66)
-
Participating in its translation
10 April 2011
You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
-
List of compatible distributions
26 April 2011
The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

Distribution name   Version name           Version number
Debian              Squeeze                6.x.x
Debian              Wheezy                 7.x.x
Debian              Jessie                 8.x.x
Ubuntu              The Precise Pangolin   12.04 LTS
Ubuntu              The Trusty Tahr        14.04

If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)
-
A selection of projects using MediaSPIP
29 April 2011
The examples cited below are representative of specific uses of MediaSPIP in certain projects.
Do you think you have built a "remarkable" site with MediaSPIP? Let us know here.
MediaSPIP farm @ Infini
The Infini association runs reception activities, an Internet access point, training, innovative projects in the field of Information and Communication Technologies, and website hosting. In this area it plays a unique role (...)
On other sites (10342)
-
FFserver: cannot connect via RTSP
27 April 2016, by newfoundstory
I'm currently trying to stream my Windows desktop with ffmpeg to a Raspberry Pi running ffserver.
The client software I'm using needs RTSP; however, I cannot connect to the stream no matter what I try.
I even tried VLC; in its messages it just says it cannot connect to the stream.
Any help would be greatly appreciated!
I'm attempting to access the stream with rtsp://169.254.70.227:8544/test.flv; as soon as I do, it stops the ffmpeg feed.
FFserver conf
RTSPPort 8544
HTTPPort 8090 # Port to bind the server to
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000 # Maximum bandwidth per client
# set this high enough to exceed stream bitrate
CustomLog - # Remove this if you want FFserver to daemonize
<feed feed1.ffm> # This is the input feed where FFmpeg will send
File ./feed1.ffm # video stream.
FileMaxSize 100000K # Maximum file size for buffering video
ACL allow 192.168.0.8
ACL allow 192.168.0.17
ACL allow 169.254.70.227
ACL allow 169.254.9.29
ACL allow 169.254.165.231
ACL allow 10.14.2.197
ACL allow 192.168.0.13
ACL allow 10.14.2.197
ACL allow 192.168.0.13
ACL allow 192.168.1.3
ACL allow 192.168.1.4
ACL allow 192.168.1.2
</feed>
<stream test.flv> # Output stream URL definition
Format rtp
Feed feed1.ffm
NoAudio
# Video settings
VideoCodec libx264
VideoSize 720x576 # Video resolution
VideoBufferSize 2000
VideoFrameRate 30 # Video FPS
# Parameters passed to encoder
AVOptionVideo qmin 10
AVOptionVideo qmax 42
PreRoll 15
StartSendOnKey
MulticastAddress 224.124.0.1
MulticastPort 5000
MulticastTTL 16
VideoBitRate 450 # Video bitrate
</stream>
<stream> # Server status URL
Format status
# Only allow local people to get the status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</stream>
<redirect index.html> # Just a URL redirect for index
# Redirect index.html to the appropriate site
URL http://www.ffmpeg.org/
</redirect>

FFmpeg feed
ffmpeg -rtbufsize 2100M -f dshow -r 29.970 -i video=screen-capture-recorder -c video=screen-capture-recorder.flv http://169.254.70.227:8090/feed1.ffm
FFserver output
configuration: --arch=armel --target-os=linux --enable-gpl --enable-libx264 --enable-nonfree
libavutil 55. 22.101 / 55. 22.101
libavcodec 57. 35.100 / 57. 35.100
libavformat 57. 34.103 / 57. 34.103
libavdevice 57. 0.101 / 57. 0.101
libavfilter 6. 44.100 / 6. 44.100
libswscale 4. 1.100 / 4. 1.100
libswresample 2. 0.101 / 2. 0.101
libpostproc 54. 0.100 / 54. 0.100
/etc/ffserver.conf:48: Setting default value for video bit rate tolerance = 112500. Use NoDefaults to disable it.
/etc/ffserver.conf:48: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:48: Setting default value for video max rate = 20744848. Use NoDefaults to disable it.
Wed Apr 27 10:33:46 2016 FFserver started.
Wed Apr 27 10:33:46 2016 224.124.0.1:5000 - - "PLAY test.flv/streamid=0 RTP/MCAST"
Wed Apr 27 10:33:46 2016 [rtp @ 0x13d4660]Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Wed Apr 27 10:33:49 2016 169.254.165.231 - - [GET] "/feed1.ffm HTTP/1.1" 200 4175
FFMpeg output
Stream mapping:
Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
Press [q] to stop, [?] for help
[swscaler @ 000000000252f5e0] Warning: data is not aligned! This can lead to a speedloss
av_interleaved_write_frame(): Unknown error
time=00:00:05.00 bitrate= 249.0kbits/s speed=0.393x
Error writing trailer of http://169.254.70.227:8090/feed1.ffm: Error number -10053 occurred
frame= 204 fps= 15 q=26.0 Lsize= 164kB time=00:00:05.03 bitrate= 266.9kbits/s speed=0.365x
video:155kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 5.475512%
[libx264 @ 00000000025187e0] frame I:1 Avg QP:34.24 size: 32151
[libx264 @ 00000000025187e0] frame P:59 Avg QP:27.14 size: 1807
[libx264 @ 00000000025187e0] frame B:144 Avg QP:32.16 size: 168
[libx264 @ 00000000025187e0] consecutive B-frames: 4.9% 2.0% 2.9% 90.2%
[libx264 @ 00000000025187e0] mb I I16..4: 26.0% 23.1% 50.9%
[libx264 @ 00000000025187e0] mb P I16..4: 1.9% 1.6% 1.1% P16..4: 4.3% 0.6% 0.4% 0.0% 0.0% skip:90.2%
[libx264 @ 00000000025187e0] mb B I16..4: 0.2% 0.1% 0.1% B16..8: 3.1% 0.1% 0.0% direct: 0.1% skip:96.3% L0:26.0% L1:73.5% BI: 0.5%
[libx264 @ 00000000025187e0] final ratefactor: 24.13
[libx264 @ 00000000025187e0] 8x8 transform intra:31.8% inter:47.1%
[libx264 @ 00000000025187e0] coded y,u,v intra: 28.1% 8.2% 6.3% inter: 0.6% 0.2% 0.1%
[libx264 @ 00000000025187e0] i16 v,h,dc,p: 30% 63% 6% 1%
[libx264 @ 00000000025187e0] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 16% 16% 63% 1% 0% 0% 1% 0% 3%
[libx264 @ 00000000025187e0] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 28% 39% 15% 2% 2% 3% 4% 3% 4%
[libx264 @ 00000000025187e0] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 00000000025187e0] ref P L0: 66.3% 13.1% 16.9% 3.6%
[libx264 @ 00000000025187e0] ref B L0: 66.6% 29.7% 3.8%
[libx264 @ 00000000025187e0] ref B L1: 91.7% 8.3%
[libx264 @ 00000000025187e0] kb/s:191.78
Conversion failed!
-
Upload to S3 bucket from FFMpegCore
18 April 2022, by user1765862
I'm using FFMpegCore to create an image from the video at the 5th second.


var inputFile = "images/preview_video.mp4";
var processedFile = "path-to-s3-bucket";
await FFMpeg.SnapshotAsync(inputFile, processedFile, new Size(800, 600), TimeSpan.FromMilliseconds(5000));



How can I upload this processed file (image) to my S3 bucket using the FFMpegCore snapshot?


-
FFMPEG convert NV12 format to NV12 with the same height and width
7 September 2022, by Chun Wang
I want to use FFmpeg 4.2.2 to convert an input NV12 frame to an output NV12 frame with the same height and width. I used sws_scale for the conversion, but the output frame's colors are all green.


P.S. It seems there is no need to use swscale to get a frame with the same width, height and format (a plane-copy sketch is shown after the complete code below), but it is necessary in my project for dealing with other frames.


I have successfully converted the input NV12 frame to an output NV12 frame with a different height and width, and the output frame's colors were right. But I FAILED to convert NV12 to NV12 with the same height and width. It was so weird, and I don't know why :(


I want to know what the reason is and what I should do.
The following is my code. swsCtx4 was used for converting the NV12 frame to the output NV12 frame; the other contexts were used to test conversions between other formats.
Thank you for your help.


//the main code is 
 AVFrame* frame_nv12 = av_frame_alloc();
 frame_nv12->width = in_width;
 frame_nv12->height = in_height;
 frame_nv12->format = AV_PIX_FMT_NV12;
 uint8_t* frame_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width, in_height , 1));
 av_image_fill_arrays(frame_nv12->data, frame_nv12->linesize, frame_buffer_nv12, AV_PIX_FMT_NV12, in_width, in_height, 1);


 AVFrame* frame2_nv12 = av_frame_alloc();
 frame2_nv12->width = in_width1;
 frame2_nv12->height = in_height1;
 frame2_nv12->format = AV_PIX_FMT_NV12;

 uint8_t* frame2_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width1, in_height1, 1));
 av_image_fill_arrays(frame2_nv12->data, frame2_nv12->linesize, frame2_buffer_nv12, AV_PIX_FMT_NV12, in_width1, in_height1, 1);
 
 SwsContext* swsCtx4 = nullptr;
 swsCtx4 = sws_getContext(in_width, in_height, AV_PIX_FMT_NV12, in_width1, in_height1, AV_PIX_FMT_NV12,
 SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
 printf("swsCtx4\n");
 
 ret = sws_scale(swsCtx4, frame_nv12->data, frame_nv12->linesize, 0, frame_nv12->height, frame2_nv12->data, frame2_nv12->linesize);
 if (ret < 0) {
 printf("sws_4scale failed\n");
 }
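For comparison only (this is not part of the question's code), the destination frame could also be set up with av_frame_get_buffer, which allocates the buffers and fills data[]/linesize[] for the format already set on the frame. A minimal sketch, reusing the in_width1/in_height1 values above with a hypothetical frame3_nv12 name:

 // Hedged alternative to av_malloc + av_image_fill_arrays:
 // let libavutil allocate the NV12 buffers and wire up data/linesize itself.
 AVFrame* frame3_nv12 = av_frame_alloc();
 frame3_nv12->format = AV_PIX_FMT_NV12;
 frame3_nv12->width = in_width1;
 frame3_nv12->height = in_height1;
 if (av_frame_get_buffer(frame3_nv12, 0) < 0) { // 0 = default buffer alignment
  printf("could not allocate NV12 buffers\n");
 }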
 



//the complete code
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
}
#include <seeker/loggerApi.h>
#include "seeker/common.h"
#include <iostream>

// Fix: pts set to 0, dts set to 0
#define FILE_SRC "testPicFilter.yuv" // source file
#define FILE_DES "test11.yuv" // destination file

int count = 0;


int main(int argc, char* argv[])
{
 av_register_all();

 int ret = 0;
 
 //std::this_thread::sleep_for(std::chrono::milliseconds(5000));
 int count1 = 1;
 int piccount;
 int align = 1;


 /* open the input YUV file */
 FILE* fp_in = fopen(FILE_SRC, "rb+");
 if (fp_in == NULL)
 {
 printf("文件打开失败\n");
 return 0;
 }
 int in_width = 640;
 int in_height = 360;
 int in_width1 = 640;
 int in_height1 = 360;
 


 /* the output (processed) file */
 FILE* fp_out = fopen(FILE_DES, "wb+");
 if (fp_out == NULL)
 {
 printf("文件创建失败\n");
 return 0;
 }
 char buff[50];

 AVFrame* frame_in = av_frame_alloc();
 unsigned char* frame_buffer_in;
 frame_buffer_in = (unsigned char*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
 /* set the image plane pointers and memory alignment according to the image */
 av_image_fill_arrays(frame_in->data, frame_in->linesize, frame_buffer_in, AV_PIX_FMT_YUV420P, in_width, in_height, 1);

 frame_in->width = in_width;
 frame_in->height = in_height;
 frame_in->format = AV_PIX_FMT_YUV420P;


 // convert the input YUV to frame_nv12
 AVFrame* frame_nv12 = av_frame_alloc();
 frame_nv12->width = in_width;
 frame_nv12->height = in_height;
 frame_nv12->format = AV_PIX_FMT_NV12;
 uint8_t* frame_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width, in_height , 1));
 av_image_fill_arrays(frame_nv12->data, frame_nv12->linesize, frame_buffer_nv12, AV_PIX_FMT_NV12, in_width, in_height, 1);


 AVFrame* frame2_nv12 = av_frame_alloc();
 frame2_nv12->width = in_width1;
 frame2_nv12->height = in_height1;
 frame2_nv12->format = AV_PIX_FMT_NV12;

 uint8_t* frame2_buffer_nv12 = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_NV12, in_width1, in_height1, 1));
 av_image_fill_arrays(frame2_nv12->data, frame2_nv12->linesize, frame2_buffer_nv12, AV_PIX_FMT_NV12, in_width1, in_height1, 1);


 
 // frame_yuv: YUV420P frame used to convert NV12 back for writing to the output file
 AVFrame* frame_yuv = av_frame_alloc();
 frame_yuv->width = in_width;
 frame_yuv->height = in_height;
 frame_yuv->format = AV_PIX_FMT_YUV420P;
 uint8_t* frame_buffer_yuv = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P, in_width, in_height, 1));
 av_image_fill_arrays(frame_yuv->data, frame_yuv->linesize, frame_buffer_yuv,
 AV_PIX_FMT_YUV420P, in_width, in_height, 1);



 SwsContext* swsCtx = nullptr;
 swsCtx = sws_getContext(in_width, in_height, AV_PIX_FMT_YUV420P, in_width, in_height, AV_PIX_FMT_NV12,
 SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
 printf("swsCtx\n");

 SwsContext* swsCtx4 = nullptr;
 swsCtx4 = sws_getContext(in_width, in_height, AV_PIX_FMT_NV12, in_width1, in_height1, AV_PIX_FMT_NV12,
 SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
 printf("swsCtx4\n");

 
 SwsContext* swsCtx2 = nullptr;
 swsCtx2 = sws_getContext(in_width1, in_height1, AV_PIX_FMT_NV12, in_width, in_height, AV_PIX_FMT_YUV420P,
 SWS_BILINEAR | SWS_PRINT_INFO, NULL, NULL, NULL);
 printf("swsCtx2\n");





 while (1)
 {


 count++;

 if (fread(frame_buffer_in, 1, in_width * in_height * 3 / 2, fp_in) != in_width * in_height * 3 / 2)
 {
 break;
 }

 frame_in->data[0] = frame_buffer_in;
 frame_in->data[1] = frame_buffer_in + in_width * in_height;
 frame_in->data[2] = frame_buffer_in + in_width * in_height * 5 / 4;


 // convert to NV12 format
 int ret = sws_scale(swsCtx, frame_in->data, frame_in->linesize, 0, frame_in->height, frame_nv12->data, frame_nv12->linesize);
 if (ret < 0) {
 printf("sws_scale swsCtx failed\n");
 }


 ret = sws_scale(swsCtx4, frame_nv12->data, frame_nv12->linesize, 0, frame_nv12->height, frame2_nv12->data, frame2_nv12->linesize);
 if (ret < 0) {
 printf("sws_scale swsCtx4 failed\n");
 }
 

 if (ret > 0) {
 
 int ret2 = sws_scale(swsCtx2, frame2_nv12->data, frame2_nv12->linesize, 0, frame2_nv12->height, frame_yuv->data, frame_yuv->linesize);
 if (ret2 < 0) {
 printf("sws_scale swsCtx2 failed\n");
 }
 I_LOG("frame_yuv:{},{}", frame_yuv->width, frame_yuv->height);

 
 //I_LOG("frame_yuv:{}", frame_yuv->format);

 if (frame_yuv->format == AV_PIX_FMT_YUV420P)
 {

 for (int i = 0; i < frame_yuv->height; i++)
 {
 fwrite(frame_yuv->data[0] + frame_yuv->linesize[0] * i, 1, frame_yuv->width, fp_out);
 }
 for (int i = 0; i < frame_yuv->height / 2; i++)
 {
 fwrite(frame_yuv->data[1] + frame_yuv->linesize[1] * i, 1, frame_yuv->width / 2, fp_out);
 }
 for (int i = 0; i < frame_yuv->height / 2; i++)
 {
 fwrite(frame_yuv->data[2] + frame_yuv->linesize[2] * i, 1, frame_yuv->width / 2, fp_out);
 }
 printf("yuv to file\n");
 }
 }

 }


 fclose(fp_in);
 fclose(fp_out);
 av_frame_free(&frame_in);
 av_frame_free(&frame_nv12);
 av_frame_free(&frame_yuv);
 sws_freeContext(swsCtx);
 sws_freeContext(swsCtx2);
 sws_freeContext(swsCtx4);

 //std::this_thread::sleep_for(std::chrono::milliseconds(8000));

 return 0;

}
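As the P.S. above notes, when the width, height and pixel format do not change, swscale is not strictly needed: the NV12 planes can simply be copied. A minimal sketch of that plane-copy path, not the poster's code (the helper name copy_nv12_frame is hypothetical), assuming src is a valid NV12 AVFrame with data/linesize filled in:

extern "C" {
#include <libavutil/frame.h>
#include <libavutil/imgutils.h>
}

static AVFrame* copy_nv12_frame(const AVFrame* src)
{
 AVFrame* dst = av_frame_alloc();
 if (!dst)
  return nullptr;
 dst->format = src->format; // AV_PIX_FMT_NV12
 dst->width = src->width;
 dst->height = src->height;
 if (av_frame_get_buffer(dst, 0) < 0) { // allocates dst->data / dst->linesize
  av_frame_free(&dst);
  return nullptr;
 }
 // av_image_copy copies the Y plane and the interleaved UV plane line by line,
 // honouring the (possibly different) linesizes of src and dst.
 av_image_copy(dst->data, dst->linesize,
  (const uint8_t**)src->data, src->linesize,
  (AVPixelFormat)src->format, src->width, src->height);
 return dst;
}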


