
Other articles (106)
-
Websites made with MediaSPIP
2 May 2011. This page lists some websites based on MediaSPIP.
-
Creating farms of unique websites
13 April 2011. MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects/individuals, rapid deployment of multiple unique sites, and creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)
-
Participate in its translation
10 April 2011. You can help us improve the wording used in the software, or translate it into any new language so that it can reach new linguistic communities.
To do this, we use the SPIP translation interface, where all of MediaSPIP's language modules are available. You simply need to subscribe to the translators' mailing list to ask for more information.
At the moment MediaSPIP is only available in French and (...)
On other sites (11393)
-
Compile ffmpeg and x264 with --enable-shared
20 August 2016, by John Allard. I am trying to compile ffmpeg with shared-library support, because when I compiled it statically it ended up producing huge libraries: libavcodec.a, for example, is 75 MB, and any program I compiled against libavcodec was at least 50 MB. This only happens on my Raspberry Pi, though. If I do ls -lsth $(which ffmpeg) on my RPI3 I get
15M -rwxr-xr-x 1 pi pi 15M Jul 29 21:49 /home/pi/camiocam/cam/binaries/bin//ffmpeg
while if I do the same on my MacBook I get this output:
488 -r-xr-xr-x 1 john admin 242K Jul 26 19:34 /usr/local/Cellar/ffmpeg/3.1.1/bin/ffmpeg
To get shared library support I did the following:
# download and install libx264
cd ~/ffmpeg_build
git clone git://git.videolan.org/x264.git
cd x264
PATH="$HOME/bin:$PATH" ./configure --host=arm-unknown-linux-gnueabi --prefix="$HOME/ffmpeg_build" --bindir="$HOME/bin" --enable-shared --disable-opencl
# PATH="$HOME/bin:$PATH" make -j 8
# PATH="$HOME/bin:$PATH" make install
# ldconfig

and ...
# to install ffmpeg
cd ~/ffmpeg_sources
wget http://ffmpeg.org/releases/ffmpeg-snapshot.tar.bz2
tar xjvf ffmpeg-snapshot.tar.bz2
cd ffmpeg
PATH="$HOME/bin:$PATH" PKG_CONFIG_PATH="$HOME/ffmpeg_build/lib/pkgconfig" ./configure --prefix="$HOME/ffmpeg_build" --arch=armel --target-os=linux --extra-cflags="-I$HOME/ffmpeg_build/include" --enable-pic --extra-ldflags="-L$HOME/ffmpeg_build/lib" --bindir="$HOME/bin" --enable-gpl --enable-libx264 --enable-nonfree --enable-shared
PATH="$HOME/bin:$PATH" make -j 8

Which gives this output:
License: nonfree and unredistributable
Creating config.mak, config.h, and doc/config.texi...
config.h is unchanged
libavutil/avconfig.h is unchanged
libavcodec/bsf_list.c is unchanged
libavformat/protocol_list.c is unchanged
CC libavcodec/libx264.o
POD doc/ffprobe.pod
POD doc/ffmpeg-all.pod
POD doc/ffserver-all.pod
POD doc/ffmpeg.pod
POD doc/ffprobe-all.pod
POD doc/ffserver.pod
MAN doc/ffprobe.1
MAN doc/ffmpeg.1
MAN doc/ffserver.1
MAN doc/ffmpeg-all.1
MAN doc/ffprobe-all.1
MAN doc/ffserver-all.1
GEN libavcodec/libavcodec.ver
LD libavcodec/libavcodec.so.57
AR libavcodec/libavcodec.a
LD libavformat/libavformat.so.57
LD libavfilter/libavfilter.so.6
LD libavdevice/libavdevice.so.57
library.mak:102: recipe for target 'libavdevice/libavdevice.so.57' failed
make: *** [libavdevice/libavdevice.so.57] Killed
make: *** Deleting file 'libavdevice/libavdevice.so.57'

I've tried to add the --enable-pic flag to the ffmpeg configure command, but that made no difference.
-
lavd/x11grab: fix vertical repositioning
28 March 2019, by Octavio Alvarez

lavd/x11grab: fix vertical repositioning
There is a calculation error in xcbgrab_reposition() that breaks
vertical repositioning on follow_mouse. It made the bottom
reposition occur when moving the mouse lower than N pixels after
the capture bottom edge, instead of before.

This commit fixes the calculation to match the documentation.
follow_mouse: centered or number of pixels. The documentation says:
When it is specified with "centered", the grabbing region follows
the mouse pointer and keeps the pointer at the center of region;
otherwise, the region follows only when the mouse pointer reaches
within PIXELS (greater than zero) to the edge of region.
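As a rough illustration (a hypothetical sketch, not the actual ffmpeg source), the one-axis rule described above can be written as: the region only moves once the pointer comes within pixels of an edge, and the fix is that the bottom check must fire before the pointer crosses the edge rather than after.

/* Hypothetical sketch of the follow-mouse rule for one axis; not the ffmpeg implementation. */
static int follow_edge_1d(int region_pos, int region_size,
                          int pointer, int pixels, int screen_size)
{
    if (pointer - region_pos < pixels)                     /* pointer within pixels of the top/left edge */
        region_pos = pointer - pixels;
    else if (region_pos + region_size - pointer < pixels)  /* pointer within pixels of the bottom/right edge */
        region_pos = pointer - region_size + pixels;

    if (region_pos < 0)                                    /* keep the region on the screen */
        region_pos = 0;
    if (region_pos + region_size > screen_size)
        region_pos = screen_size - region_size;
    return region_pos;
}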
-
Getting and decoding video by RTP H264
29 November 2023, by AlekseiKraev

Parsing RTP H264.
WINAPI C. A queue from C++ has been used, I repent.


RTP is generated using FFMPEG with the following command:
ffmpeg.exe -f gdigrab -framerate 25 -i desktop -s 853x480 -b:v 120000 -c:v libx264 -f rtp rtp://127.0.0.1:8080


Parsing the incoming RTP/h264 stream and converting it to an RGB matrix.
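Each reassembled NAL unit is handed to libavcodec in Annex-B form, i.e. prefixed with the four-byte 00 00 00 01 start code (that is what the inteval constant in the code does via memcpy). A minimal sketch of that framing, using a hypothetical helper built around the RTPData_DType struct declared below; it is not part of the original code:

// Hypothetical helper: wrap a raw NAL unit in Annex-B framing so that
// av_parser_parse2()/avcodec_send_packet() can consume it.
static RTPData_DType make_annexb_unit(const unsigned char* nal, int nal_size)
{
    RTPData_DType out;
    out.size = nal_size + 4;
    out.data = (unsigned char*)malloc(out.size);
    if (out.data == NULL) { out.size = 0; return out; }
    out.data[0] = 0x00; out.data[1] = 0x00;
    out.data[2] = 0x00; out.data[3] = 0x01;   // Annex-B start code
    memcpy(out.data + 4, nal, nal_size);
    return out;
}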


video_H264_decode.h


#pragma once
#ifndef _VIDEO_H264_DECODE_SEND_H // Prevent this module from being included twice
#define _VIDEO_H264_DECODE_SEND_H
//******************************************************************************
// Section include
//******************************************************************************
#include "main.h"
#include 
//******************************************************************************
// Constants
//******************************************************************************

//******************************************************************************
// Type
//******************************************************************************
typedef struct {
 unsigned char* data;
 int size;
}RTPData_DType;

typedef struct 
{
 union
 {
 struct
 {
 char V:2; //version
 char P:1; //padding
 char X:1; //extension
 char CC:4; //CSRC count

 char M:1; //marker (flag of the last packet of an AU)
 char PT:7; //payload type (RTP payload media type, H264 = 96)

 short sequence_number; //sequence number of the RTP packet, incremented by 1
 int time_stamp; //timestamp of the media sample
 int SSRC; //packets with the same SSRC come from the same source
 };
 unsigned char data[12];
 };
}RTPHeader_DType;
//******************************************************************************
// Global var
//******************************************************************************

//******************************************************************************
// Local function prototype
//******************************************************************************
UCHAR rtp_H264_recive_init(void);
UCHAR RTPStop(void);
//******************************************************************************
// Macros
//******************************************************************************
#define BYTE2_SWAP(X) ((((short)(X) & 0xff00) >> 8) |(((short)(X) & 0x00ff) << 8))
#endif
//******************************************************************************
// ENF OF FILE
//******************************************************************************



video_H264_decode.c


//******************************************************************************
//include
//******************************************************************************
#include "main.h"
/*#include "video_H264_decode.h"
#include 
#include 
#include <chrono>
#include 

#pragma comment(lib, "ws2_32.lib")
#include */
#include <iostream>
#include <queue>

extern "C" {
#include "libavformat/avformat.h"
#include "libavfilter/avfilter.h"
#include "libavdevice/avdevice.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libpostproc/postprocess.h"
#include "libavcodec/avcodec.h"
}

#pragma comment(lib,"avcodec.lib")
#pragma comment(lib,"avdevice.lib")
#pragma comment(lib,"avfilter.lib")
#pragma comment(lib,"avformat.lib")
#pragma comment(lib,"avutil.lib")
#pragma comment(lib,"postproc.lib")
#pragma comment(lib,"swresample.lib")
#pragma comment(lib,"swscale.lib")

#pragma warning(disable: 4996)
//******************************************************************************
// Section for determining the variables used in the module
//******************************************************************************
//------------------------------------------------------------------------------
// Global
//------------------------------------------------------------------------------

//------------------------------------------------------------------------------
// Local
//------------------------------------------------------------------------------
const int inteval = 0x01000000;
BOOL FlagRTPActive = TRUE;

HANDLE hMutexRTPRecive;
HANDLE hSemaphoreRTP;
HANDLE hTreadRTPRecive;
HANDLE hTreadRTPDecode;


SOCKET RTPSocket; //socket UDP RTP
RTPData_DType packet;
RTPData_DType FU_buffer = { 0 };

std::queue<RTPData_DType> q; // reassembled NAL units, filled by the receive thread and drained by the decode thread
std::set<int> seq;

AVFormatContext* pAVFormatContext;
AVCodecContext* pAVCodecContext;
const AVCodec* pAVCodec;
AVFrame* pAVFrame;
AVFrame* AVFrameRGG;
SwsContext* pSwsContext;
AVPacket *pAVPacket;
AVCodecParserContext* pAVCodecParserContext;

UINT port;
//******************************************************************************
// Section of prototypes of local functions
//******************************************************************************
DWORD WINAPI rtp_H264_recive_Procedure(CONST LPVOID lpParam);
DWORD WINAPI rtp_decode_Procedure(CONST LPVOID lpParam);
char RTPSocketInit(void);
void RTPPacketParser(void);
char RTPSocketRecive(void);
void Decode_NaluToAVFrameRGG();
//******************************************************************************
// Section of the description of functions
//******************************************************************************
UCHAR rtp_H264_recive_init(void)
{
 hSemaphoreRTP = CreateSemaphore(
 NULL, // default security attributes
 0, // initial count
 1, // maximum count
 NULL); // unnamed semaphore

 hMutexRTPRecive = CreateMutex(
 NULL, // default security attributes
 FALSE, // initially not owned
 NULL); // unnamed mutex

 hTreadRTPRecive = CreateThread(NULL, NULL, rtp_H264_recive_Procedure, NULL, NULL, NULL);
 hTreadRTPDecode = CreateThread(NULL, NULL, rtp_decode_Procedure, NULL, NULL, NULL);

 return 0;
}
//------------------------------------------------------------------------------
UCHAR RTPStop(void)
{
 FlagRTPActive = FALSE;

 if (hSemaphoreRTP) CloseHandle(hSemaphoreRTP);
 if (hMutexRTPRecive) CloseHandle(hMutexRTPRecive);
 if (hTreadRTPRecive) CloseHandle(hTreadRTPRecive);

 closesocket(RTPSocket); 

 return 0;
}
//------------------------------------------------------------------------------
DWORD WINAPI rtp_H264_recive_Procedure(CONST LPVOID lpParam)
{
 while (RTPSocketInit() == 0)
 Sleep(2000);
 
 while (1)
 {
 RTPSocketRecive();
 RTPPacketParser();
 ReleaseSemaphore(hSemaphoreRTP, 1, NULL);
 }
}
//------------------------------------------------------------------------------
DWORD WINAPI rtp_decode_Procedure(CONST LPVOID lpParam)
{
 port = param.Option.VideoPort;

 pAVPacket = av_packet_alloc();
 if (!pAVPacket)
 {
 MessageBox(NULL, L"ERROR Could not allocate pAVPacket", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 }
 av_init_packet(pAVPacket);

 /* find the MPEG-1 video decoder */
 pAVCodec = avcodec_find_decoder(AV_CODEC_ID_H264);
 if (!pAVCodec)
 {
 MessageBox(NULL, L"ERROR Codec not found", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 }

 pAVCodecParserContext = av_parser_init(pAVCodec->id);
 if (!pAVCodecParserContext)
 {
 MessageBox(NULL, L"ERROR Parser not found", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 }

 pAVCodecContext = avcodec_alloc_context3(pAVCodec);
 if (!pAVCodecContext) 
 {
 MessageBox(NULL, L"ERROR Could not allocate video codec context", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 }
 
 if (avcodec_open2(pAVCodecContext, pAVCodec, NULL) < 0)
 {
 MessageBox(NULL, L"ERROR Could not open codec", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 }

 pAVFrame = av_frame_alloc();
 if (!pAVFrame) 
 {
 MessageBox(NULL, L"ERROR Could not allocate video frame", L"Init decoder error", MB_OK | MB_ICONERROR);
 exit(1);
 } 
 
 while (FlagRTPActive)
 {
 if(port != param.Option.VideoPort)
 closesocket(RTPSocket);

 WaitForSingleObject(hSemaphoreRTP, 500); 
 Decode_NaluToAVFrameRGG();
 }

 avformat_free_context(pAVFormatContext);
 av_frame_free(&pAVFrame);
 avcodec_close(pAVCodecContext);
 av_packet_free(&pAVPacket);

 
 if (hTreadRTPDecode) CloseHandle(hTreadRTPDecode);

 return 0;
}

//------------------------------------------------------------------------------
char RTPSocketInit(void)
{ 
 sockaddr_in RTPSocketAddr;
 
 RTPSocketAddr.sin_family = AF_INET;
 RTPSocketAddr.sin_addr.s_addr = htonl(INADDR_ANY);
 RTPSocketAddr.sin_port = htons(param.Option.VideoPort);

 RTPSocket = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
 if (RTPSocket == INVALID_SOCKET)
 {
 MessageBox(NULL, L"ERROR Invalid RTP Socket", L"UDP Socket", MB_OK | MB_ICONERROR);
 return 0;
 }

 int option = 12000;
 if (setsockopt(RTPSocket, SOL_SOCKET, SO_RCVBUF, (char*)&option, sizeof(option)) < 0)
 {
 printf("setsockopt failed\n");

 }

 /*option = TRUE;
 if (setsockopt(RTPSocket, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (char*)&option, sizeof(option)) < 0)
 {
 printf("setsockopt failed\n");

 }*/

 if (bind(RTPSocket, (sockaddr*)&RTPSocketAddr, sizeof(RTPSocketAddr)) == SOCKET_ERROR)
 {
 MessageBox(NULL, L"ERROR bind", L"UDP Socket", MB_OK | MB_ICONERROR);
 closesocket(RTPSocket);
 return 0;
 }
 return 1;
}

//------------------------------------------------------------------------------
char RTPSocketRecive(void)
{
 static char RecvBuf[2000];
 static int BufLen = 2000;

 int iResult = 0;
 struct sockaddr_in SenderAddr;
 int SenderAddrSize = sizeof(SenderAddr);
 TCHAR szErrorMsg[100];

 iResult = recvfrom(RTPSocket, RecvBuf, BufLen, 0, NULL, NULL);
 if (iResult == SOCKET_ERROR && FlagRTPActive == TRUE)
 {
 StringCchPrintf(szErrorMsg, 100, L"ERROR recvfrom, error: %d", WSAGetLastError());
 MessageBox(NULL, szErrorMsg, L"UDP Socket", MB_OK | MB_ICONERROR);
 return -1;
 }

 packet.data = (unsigned char*)RecvBuf;
 packet.size = iResult;

 return 1;
}

//------------------------------------------------------------------------------
void RTPPacketParser(void)
{
 RTPHeader_DType RTPHeader;
 RTPData_DType NALU;
 unsigned char* buffer = packet.data;
 int pos = 0;
 static int count = 0;
 static short lastSequenceNumber = 0xFFFF;
 short type;
 char payload_header;

 //read rtp header
 memcpy(&RTPHeader, buffer, sizeof(RTPHeader));
 RTPHeader.sequence_number = BYTE2_SWAP(RTPHeader.sequence_number);
 pos += 12; 
 
 if (RTPHeader.X) {
 //profile extension
 short define;
 short length;
 length = buffer[pos + 3];//suppose not so long extension
 pos += 4;
 pos += (length * 4);
 }

 payload_header = buffer[pos];
 type = payload_header & 0x1f; //RTP payload type
 pos++;
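 // Per RFC 6184: type 24 = STAP-A (several NAL units aggregated into one RTP packet),
 // type 28 = FU-A (one NAL unit fragmented across several packets); any other value is a single NAL unit.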
 
 //STAP-A
 if (type == 24)
 { 
 while (pos < packet.size)
 {
 unsigned short NALU_size;
 memcpy(&NALU_size, buffer + pos, 2);
 NALU_size = BYTE2_SWAP(NALU_size);
 pos += 2;
 char NAL_header = buffer[pos];
 short NAL_type = NAL_header & 0x1f;

 if (NAL_type == 7) 
 {
 count++;
 //cout<<"SPS, sequence number: "<< ...
 }
 else if (NAL_type == 8)
 {
 //cout<<"PPS, sequence number: "<< ...
 }
 else if (NAL_type == 10)
 {
 //cout<<"end of sequence, sequence number: "<< ...
 }

 if (count > 0) 
 {
 NALU.data = (unsigned char*) malloc(NALU_size + 4);
 NALU.size = NALU_size + 4;
 memcpy(NALU.data, &inteval, 4);
 memcpy(NALU.data + 4, &buffer[pos], NALU_size);

 WaitForSingleObject(hMutexRTPRecive, INFINITE);
 q.push(NALU);
 ReleaseMutex(hMutexRTPRecive);
 }

 pos += NALU_size;
 }
 }
 //FU-A Fragmentation unit
 else if (type == 28)
 { 
 //FU header
 char FU_header = buffer[pos];
 bool fStart = FU_header & 0x80;
 bool fEnd = FU_header & 0x40;

 //NAL header
 char NAL_header = (payload_header & 0xe0) | (FU_header & 0x1f);
 short NAL_type = FU_header & 0x1f;
 if (NAL_type == 7) 
 {
 count++;
 //SPS
 }
 else if (NAL_type == 8) 
 {
 //PPS
 }
 else if (NAL_type == 10) 
 {
 //end of sequence
 }
 pos++;

 int size = packet.size - pos;
 
 if (count > 0)
 {
 if (fStart) 
 {
 if (FU_buffer.size != 0)
 {
 free(FU_buffer.data);
 FU_buffer.size = 0;
 }
 FU_buffer.data = (unsigned char*)malloc(size + 5);
 if (FU_buffer.data == NULL)
 return;
 FU_buffer.size = size + 5;
 memcpy(FU_buffer.data, &inteval, 4);
 memcpy(FU_buffer.data + 4, &NAL_header, 1);
 memcpy(FU_buffer.data + 5, buffer + pos, size);
 }
 else
 {
 unsigned char* temp = (unsigned char*)malloc(FU_buffer.size + size);
 memcpy(temp, FU_buffer.data, FU_buffer.size);
 memcpy(temp + FU_buffer.size, buffer + pos, size);
 if (FU_buffer.size != 0) free(FU_buffer.data);
 FU_buffer.data = temp;
 FU_buffer.size += size;
 }

 if (fEnd)
 {
 NALU.data = (unsigned char*)malloc(FU_buffer.size);
 NALU.size = FU_buffer.size;
 memcpy(NALU.data, FU_buffer.data, FU_buffer.size);

 WaitForSingleObject(hMutexRTPRecive, INFINITE);
 q.push(NALU);
 ReleaseMutex(hMutexRTPRecive);

 free(FU_buffer.data);
 FU_buffer.size = 0;
 }
 }
 
 }
 else 
 {
 //other type
 short NAL_type = type;
 if (NAL_type == 7) 
 {
 count++;
 //SPS
 }
 else if (NAL_type == 8) 
 {
 //PPS
 }
 else if (NAL_type == 10) 
 {
 //end of sequence
 }

 int size = packet.size - pos + 1;
 if (count > 0)
 {
 NALU.data = (unsigned char*)malloc(size+4);
 NALU.size = size + 4;
 memcpy(NALU.data, &inteval, 4);
 memcpy(NALU.data + 4, &buffer[12], size);

 WaitForSingleObject(hMutexRTPRecive, INFINITE);
 q.push(NALU);
 ReleaseMutex(hMutexRTPRecive);
 }
 }
}

//------------------------------------------------------------------------------
void Decode_NaluToAVFrameRGG()
{
 unsigned char cntNALU;
 int len = 0;
 static int size = 0;
 unsigned char* data = NULL;
 int ret;
 RTPData_DType NALU;

 av_frame_unref(pAVFrame);
 av_packet_unref(pAVPacket);
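 // Drain every NAL unit queued by the receive thread; av_parser_parse2() splits the
 // Annex-B data into packets for avcodec_send_packet()/avcodec_receive_frame().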

 while(1)
 {
 WaitForSingleObject(hMutexRTPRecive, INFINITE);
 if (q.empty())
 {
 ReleaseMutex(hMutexRTPRecive);
 break;
 }
 NALU = q.front();
 q.pop();
 ReleaseMutex(hMutexRTPRecive);

 data = NALU.data;
 size = NALU.size;

 while(size)
 { 
 len = av_parser_parse2(pAVCodecParserContext, pAVCodecContext, &pAVPacket->data, &pAVPacket->size,
 (uint8_t*)data, size,
 AV_NOPTS_VALUE, AV_NOPTS_VALUE, AV_NOPTS_VALUE);

 data = len ? data + len : data;
 size -= len;
 
 if (pAVPacket->size)
 {
 ret = avcodec_send_packet(pAVCodecContext, pAVPacket);

 if(ret == AVERROR_EOF)
 MessageBox(NULL, L"the codec has been fully flushed, and there will be no more output frames", L"avcodec_send_packet", MB_OK | MB_ICONERROR);

 if (ret < 0)
 {
 MessageBox(NULL, L"ERROR sending a packet for decoding", L"Decode", MB_OK | MB_ICONERROR);
 //exit(1);
 }

 while (ret >= 0) 
 {
 ret = avcodec_receive_frame(pAVCodecContext, pAVFrame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 break;
 else if (ret < 0) {
 MessageBox(NULL, L"ERROR during decoding", L"Decode", MB_OK | MB_ICONERROR);
 //exit(1);
 }
 
 AVFrameRGG = av_frame_alloc(); 
 pSwsContext = sws_getContext(pAVCodecContext->width, pAVCodecContext->height, pAVCodecContext->pix_fmt,
 pAVCodecContext->width, pAVCodecContext->height, AV_PIX_FMT_RGB24, SWS_SPLINE,
 0, 0, 0);

 sws_scale_frame(pSwsContext, AVFrameRGG, pAVFrame);
 
 ImgBufferSet(AVFrameRGG->data[0], pAVCodecContext->height, pAVCodecContext->width, AVFrameRGG->linesize[0]);

 sws_freeContext(pSwsContext);
 av_frame_free(&AVFrameRGG);
 }
 }
 }
 free(NALU.data);
 NALU.size = 0;
 } 
}
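
For completeness, a minimal hypothetical usage sketch (not part of the original post), assuming param.Option.VideoPort and ImgBufferSet() are provided by main.h as the code above expects:

#include "main.h"

int main(void)
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)  // Winsock must be initialised before any socket call
        return 1;

    rtp_H264_recive_init();  // starts the receive and decode threads

    Sleep(60 * 1000);        // placeholder: let the stream run for a minute

    RTPStop();               // clears the run flag, closes handles and the socket
    WSACleanup();
    return 0;
}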



It's SOLVED.