
Media (1)
-
Rennes Emotion Map 2010-11
19 October 2011, by
Updated: July 2013
Language: French
Type: Text
Other articles (72)
-
Multilang: improving the interface for multilingual blocks
18 February 2011, by
Multilang is an additional plugin that is not enabled by default when MediaSPIP is initialised.
Once it is activated, MediaSPIP init automatically puts a preconfiguration in place so that the new feature is immediately operational. No separate configuration step is therefore required.
-
What is a form mask?
13 June 2013, by
A form mask is a customisation of the upload form for media, sections, news items, editorials and links to other sites.
Each object's publication form can therefore be customised.
To customise form fields, go to the administration area of your MediaSPIP and select "Configuration des masques de formulaires".
Then select the form to modify by clicking on its object type. (...)
-
MediaSPIP v0.2
21 June 2013, by
MediaSPIP 0.2 is the first stable MediaSPIP release.
Its official release date is June 21, 2013, and it is announced here.
The zip file provided here contains only the sources of MediaSPIP in its standalone version.
To get a working installation, you must manually install all software dependencies on the server.
If you want to use this archive for an installation in "farm mode", you will also need to carry out other manual (...)
On other sites (7534)
-
RTMP Broadcast packet body structure for Twitch
22 May 2018, by Dobby
I'm currently working on a project similar to OBS, where I'm capturing screen data, encoding it with the x264 library, and then broadcasting it to a Twitch server.
Currently the servers are accepting the data, but no video is played: it buffers for a moment, then returns the error code "2000: network error".
Like OBS Classic, I'm dividing each NAL provided by x264 by its type and then making changes to each:
int frame_size = x264_encoder_encode(encoder, &nals, &num_nals, &pic_in, &pic_out);
//sort the NALs into their types and make the necessary adjustments
int timeOffset = int(pic_out.i_pts - pic_out.i_dts);
timeOffset = htonl(timeOffset); //host-to-network translation, ensures the bytes are in big-endian order
BYTE *timeOffsetAddr = ((BYTE*)&timeOffset) + 1; //skip the most significant byte of the 32-bit offset
videoSection sect;
bool foundFrame = false;
uint8_t * spsPayload = NULL;
int spsSize = 0;
for (int i = 0; i < num_nals; i++) {
//std::cout << "VideoEncoder: EncodedImages Size: " << encodedImages->size() << std::endl;
x264_nal_t &nal = nals[i];
//std::cout << "NAL is:" << nal.i_type << std::endl;
//need to account for pps/sps, seems to always be the first frame sent
if (nal.i_type == NAL_SPS) {
spsSize = nal.i_payload;
spsPayload = (uint8_t*)malloc(spsSize);
memcpy(spsPayload, nal.p_payload, spsSize);
} else if (nal.i_type == NAL_PPS){
//pps always happens after sps
if (spsPayload == NULL) {
std::cout << "VideoEncoder: critical error, sps not set" << std::endl;
}
uint8_t * payload = (uint8_t*)malloc(nal.i_payload + spsSize);
memcpy(payload, spsPayload, spsSize);
memcpy(payload + spsSize, nal.p_payload, nal.i_payload); //append the PPS after the SPS just copied
sect = { nal.i_payload + spsSize, payload, nal.i_type };
encodedImages->push(sect);
} else if (nal.i_type == NAL_SEI || nal.i_type == NAL_FILLER) {
//these need some bytes at the start removed
BYTE *skip = nal.p_payload;
while (*(skip++) != 0x1);
int skipBytes = (int)(skip - nal.p_payload);
int newPayloadSize = (nal.i_payload - skipBytes);
uint8_t * payload = (uint8_t*)malloc(newPayloadSize);
memcpy(payload, nal.p_payload + skipBytes, newPayloadSize);
sect = { newPayloadSize, payload, nal.i_type };
encodedImages->push(sect);
} else if (nal.i_type == NAL_SLICE_IDR || nal.i_type == NAL_SLICE) {
//these packets need an additional section at the start
BYTE *skip = nal.p_payload;
while (*(skip++) != 0x1);
int skipBytes = (int)(skip - nal.p_payload);
std::vector<BYTE> bodyData;
if (!foundFrame) {
if (nal.i_type == NAL_SLICE_IDR) { bodyData.push_back(0x17); } else { bodyData.push_back(0x27); } //add a 17 or a 27 as appropriate
bodyData.push_back(1);
bodyData.push_back(*timeOffsetAddr);
foundFrame = true;
}
//put into the payload the bodyData followed by the nal payload
int bodyDataSize = (int)bodyData.size(); //sizeof a pointer would only give 4 or 8, so use the vector's size
int newPayloadSize = (nal.i_payload - skipBytes);
uint8_t * payload = (uint8_t*)malloc(bodyDataSize + newPayloadSize);
memcpy(payload, bodyData.data(), bodyDataSize);
memcpy(payload + bodyDataSize, nal.p_payload + skipBytes, newPayloadSize);
int totalSize = bodyDataSize + newPayloadSize;
sect = { totalSize, payload, nal.i_type };
encodedImages->push(sect);
} else {
std::cout << "VideoEncoder: Nal type did not match expected" << std::endl;
continue;
}
}
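One note on the SPS/PPS branch above: FLV/RTMP players normally expect the SPS and PPS delivered once, wrapped in an AVCDecoderConfigurationRecord and sent as an AVCPacketType 0 "sequence header", rather than concatenated as raw Annex-B NAL units. Here is a minimal sketch of that record (my own illustration based on the FLV/MP4 specifications, not code from the question; it assumes sps/pps point at the NAL bytes without their 00 00 00 01 start codes):
#include <cstdint>
#include <vector>
std::vector<uint8_t> buildAvcConfigRecord(const uint8_t *sps, int spsLen,
                                          const uint8_t *pps, int ppsLen)
{
    std::vector<uint8_t> rec;
    rec.push_back(1);                    //configurationVersion
    rec.push_back(sps[1]);               //AVCProfileIndication (copied from the SPS)
    rec.push_back(sps[2]);               //profile_compatibility
    rec.push_back(sps[3]);               //AVCLevelIndication
    rec.push_back(0xff);                 //6 reserved bits + lengthSizeMinusOne = 3 (4-byte NALU lengths)
    rec.push_back(0xe1);                 //3 reserved bits + numOfSequenceParameterSets = 1
    rec.push_back((spsLen >> 8) & 0xff); //16-bit SPS length, big-endian
    rec.push_back(spsLen & 0xff);
    rec.insert(rec.end(), sps, sps + spsLen);
    rec.push_back(1);                    //numOfPictureParameterSets = 1
    rec.push_back((ppsLen >> 8) & 0xff); //16-bit PPS length, big-endian
    rec.push_back(ppsLen & 0xff);
    rec.insert(rec.end(), pps, pps + ppsLen);
    return rec;
}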
The NAL payload data is then put into a struct, videoSection, in a queue buffer:
//used to transfer encoded data
struct videoSection {
int frameSize;
uint8_t* payload;
int type;
};
After that it is picked up by the broadcaster, a few more changes are made, and then I call RTMP_SendPacket():
videoSection sect = encodedImages->front();
encodedImages->pop();
//std::cout << "Broadcaster: Frame Size: " << sect.frameSize << std::endl;
//two methods of sending RTMP data, _sendpacket and _write. Using sendpacket for greater control
RTMPPacket * packet;
unsigned char* buf = (unsigned char*)sect.payload;
int type = buf[0] & 0x1f; //& 0x1f extracts the 5-bit NAL unit type from the header byte
int len = sect.frameSize;
long timeOffset = GetTickCount() - rtmp_start_time;
//assign space packet will need
packet = (RTMPPacket *)malloc(sizeof(RTMPPacket)+RTMP_MAX_HEADER_SIZE + len + 9);
memset(packet, 0, sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE);
packet->m_body = (char *)packet + sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE;
packet->m_nBodySize = len + 9;
//std::cout << "Broadcaster: Packet Size: " << sizeof(RTMPPacket) + RTMP_MAX_HEADER_SIZE + len + 9 << std::endl;
//std::cout << "Broadcaster: Packet Body Size: " << len + 9 << std::endl;
//set body to point to the packetbody
unsigned char *body = (unsigned char *)packet->m_body;
memset(body, 0, len + 9);
//NAL_SLICE_IDR represents a keyframe
//the first byte of the body is the FLV frame-type/codec-id byte
body[0] = 0x27; //inter frame (2) + AVC codec id (7)
if (sect.type == NAL_SLICE_IDR) {
body[0] = 0x17; //keyframe (1) + AVC codec id (7)
}
//-------------------------------------------------------------------------------
//this section taken from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
//in an effort to understand the packet format. It does not resolve my previous issues formatting the data for Twitch to play it.
//sets the AVC packet type and composition time
body[1] = 0x01; //AVCPacketType 1 = AVC NALU
body[2] = 0x00; //24-bit composition time offset
body[3] = 0x00;
body[4] = 0x00;
//write len as a 4-byte big-endian NALU length prefix
//(shift len right and mask off each byte)
/*body[5] = (len >> 24) & 0xff;
body[6] = (len >> 16) & 0xff;
body[7] = (len >> 8) & 0xff;
body[8] = (len) & 0xff;*/
//end code sourced from https://stackoverflow.com/questions/25031759/using-x264-and-librtmp-to-send-live-camera-frame-but-the-flash-cant-show
//-------------------------------------------------------------------------------
//copy from buffer into rest of body
memcpy(&body[9], buf, len);
//DEBUG
//save individual packet body to a file with name rtmp[packetnum]
//determine why some packets do not have 0x27 or 0x17 at the start
//still happening, makes no sense given the above code
/*std::string fileLocation = "rtmp" + std::to_string(packCount++);
std::cout << fileLocation << std::endl;
const char * charConversion = fileLocation.c_str();
FILE* saveFile = NULL;
saveFile = fopen(charConversion, "w+b");//open as write and binary
if (!fwrite(body, len + 9, 1, saveFile)) {
std::cout << "VideoEncoder: Error while trying to write to file" << std::endl;
}
fclose(saveFile);*/
//END DEBUG
//other packet details
packet->m_hasAbsTimestamp = 0;
packet->m_packetType = RTMP_PACKET_TYPE_VIDEO;
if (rtmp != NULL) {
packet->m_nInfoField2 = rtmp->m_stream_id;
}
packet->m_nChannel = 0x04;
packet->m_headerType = RTMP_PACKET_SIZE_LARGE;
packet->m_nTimeStamp = timeOffset;
//send the packet
if (rtmp != NULL) {
RTMP_SendPacket(rtmp, packet, TRUE);
}
I can see in the inspector that Twitch is receiving the data at a steady 3 kbps, so I'm sure something is wrong with how I'm adjusting the data before sending it. Can anyone advise me on what I'm doing wrong here?
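For reference, here is how the FLV specification lays out the AVC video tag body that RTMP servers such as Twitch parse. This is a minimal sketch of my own, not the question's code: note the 24-bit composition time and the 4-byte big-endian NALU length prefix (AVCC framing) where the commented-out body[5..8] lines were; Annex-B start codes are not used inside RTMP packets.
#include <cstdint>
#include <vector>
//Builds the body of one FLV AVC video tag from a single NAL unit
//(nalData/nalSize must exclude the Annex-B start code).
std::vector<uint8_t> buildAvcVideoBody(const uint8_t *nalData, int nalSize,
                                       bool keyframe, int32_t compositionTimeMs)
{
    std::vector<uint8_t> body;
    body.push_back(keyframe ? 0x17 : 0x27);           //FrameType (1=key, 2=inter) | CodecID 7 (AVC)
    body.push_back(0x01);                             //AVCPacketType 1 = NALU (0 = sequence header)
    body.push_back((compositionTimeMs >> 16) & 0xff); //CompositionTime, 24-bit big-endian (pts - dts)
    body.push_back((compositionTimeMs >> 8) & 0xff);
    body.push_back(compositionTimeMs & 0xff);
    body.push_back((nalSize >> 24) & 0xff);           //4-byte big-endian NALU length prefix
    body.push_back((nalSize >> 16) & 0xff);
    body.push_back((nalSize >> 8) & 0xff);
    body.push_back(nalSize & 0xff);
    body.insert(body.end(), nalData, nalData + nalSize);
    return body;
}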
-
How to skip frames while decoding an H264 stream?
16 September 2018, by TTGroup
I'm using FFmpeg to decode an H264 (or H265) RTSP stream.
My system consists of two pieces of software: a Server and a Client.
Server: reads frames from the RTSP stream --> forwards frames to the Client
Client: receives frames from the Server --> decodes --> renders
I have implemented this and it works OK, but there is one case where my system does not work well: when the connection between Server and Client is slow, frames cannot be transferred to the Client in real time.
At present I deal with this issue by skipping some frames (not sending them to the Client) when the queue reaches its size limit. The following is my summarised code:
//At Server Software (include 2 threads A and B)
//Thread A: Read AVPacket and forward to Client
while(true)
{
AVPacket packet;
av_init_packet(&packet);
packet.size = 0;
packet.data = NULL;
int ret = AVERROR(EAGAIN);
while (AVERROR(EAGAIN) == ret)
ret = av_read_frame(pFormatCtx, &packet);
if(packet.size > 0)
{
if(mySendQueue.count < 120) //limit 120 packet in queue
mySendQueue.Enqueue(packet); ////Thread B will read from this queue, to send packets to Client via TCP socket
else
;//SkipThisFrame ***: No send
}
}
//Thread B: Send To Client via TCP Socket
while (true)
{
AVPacket packet;
if(mySendQueue.Dequeue(packet))
{
SendPacketToClient(packet);
}
}
//At Client Software: Receive AVPacket from Server --> Decode --> Render
while (true)
{
AVPacket packet;
AVFrame frame;
ReadPacketFromServer(packet);
if (av_decode_asyn(pCodecCtx, &frame, &frameFinished, &packet) == RS_OK)
{
if (frameFinished)
{
RenderFrame(frame);
}
}
}
UINT32 __clrcall av_decode_asyn(AVCodecContext *pCodecCtx, AVFrame *frame, int *frameFinished, AVPacket *packet)
{
int ret = -1;
*frameFinished = 0;
if (packet)
{
ret = avcodec_send_packet(pCodecCtx, packet);
// In particular, we don't expect AVERROR(EAGAIN), because we read all
// decoded frames with avcodec_receive_frame() until done.
if (ret < 0 && ret != AVERROR_EOF)
return RS_NOT_OK;
}
ret = avcodec_receive_frame(pCodecCtx, frame);
if (ret < 0 && ret != AVERROR(EAGAIN))
{
return RS_NOT_OK;
}
if (ret >= 0)
*frameFinished = 1;
return RS_OK;
}
My question concerns the line of code marked
SkipThisFrame ***
above. This algorithm skips frames continuously, so could it cause unexpected errors or a crash in the decoder on the Client? And when frames are skipped like that, will the Client still render frames normally?
And can someone show me a proper algorithm for skipping frames in my case?
Thank you very much!
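A common approach to the question above (a sketch of my own under the same queue assumptions as the question's code, not the asker's implementation): rather than dropping packets one by one, drop everything from the overflow point up to the next keyframe, so the Client's decoder never receives a P/B-frame whose reference frames are missing. That avoids the decoder errors and visual corruption that dropping arbitrary packets causes.
bool dropping = false;
while (true)
{
    AVPacket packet;
    av_init_packet(&packet);
    packet.size = 0;
    packet.data = NULL;
    if (av_read_frame(pFormatCtx, &packet) < 0)
        break;
    bool isKeyframe = (packet.flags & AV_PKT_FLAG_KEY) != 0;
    if (dropping && isKeyframe)
        dropping = false;          //a new GOP starts here: resume sending
    if (!dropping && mySendQueue.count >= 120)
        dropping = true;           //queue is full: drop the rest of this GOP
    if (dropping)
        av_packet_unref(&packet);  //skip until the next keyframe
    else
        mySendQueue.Enqueue(packet);
}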
-
Logitech web camera and Linux
8 November 2019, by Nick Saw
I have a Logitech C310 camera with declared characteristics of 720p at 30 fps.
If you connect the camera to Windows, the recording fully matches the stated 720p 30 fps: the picture is clear.
The challenge is to connect the same camera to an OrangePi (running the Armbian server image) and to save video files on it.
The camera appears as /dev/video0.
sudo ffmpeg -f v4l2 -s 1280x720 -i /dev/video0 output.wmv
As a result I get a choppy picture at a rate of 5 fps.
Maybe I'm using ffmpeg incorrectly? Please help, anyone who has experience with web cameras on Linux...
Thanks in advance.
USB camera configuration:
v4l2-ctl --all --device=/dev/video0
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : UVC Camera (046d:081b)
Bus info : usb-1c1c000.usb-1
Driver version: 4.14.18
Capabilities : 0x84200001
Video Capture
Streaming
Extended Pix Format
Device Capabilities
Device Caps : 0x04200001
Video Capture
Streaming
Extended Pix Format
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1280/720
Pixel Format : 'YUYV'
Field : None
Bytes per Line : 2560
Size Image : 1843200
Colorspace : sRGB
Transfer Function : Default
YCbCr/HSV Encoding: Default
Quantization : Default
Flags :
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1280, Height 720
Default : Left 0, Top 0, Width 1280, Height 720
Pixel Aspect: 1/1
Selection: crop_default, Left 0, Top 0, Width 1280, Height 720
Selection: crop_bounds, Left 0, Top 0, Width 1280, Height 720
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 5.000 (5/1)
Read buffers : 0
brightness (int) : min=0 max=255 step=1 default=128 value=128
contrast (int) : min=0 max=255 step=1 default=32 value=32
saturation (int) : min=0 max=255 step=1 default=32 value=32
white_balance_temperature_auto (bool) : default=1 value=1
gain (int) : min=0 max=255 step=1 default=64 value=192
power_line_frequency (menu) : min=0 max=2 default=2 value=2
white_balance_temperature (int) : min=0 max=10000 step=10 default=4000 value=4610 flags=inactive
sharpness (int) : min=0 max=255 step=1 default=24 value=24
backlight_compensation (int) : min=0 max=1 step=1 default=0 value=0
exposure_auto (menu) : min=0 max=3 default=3 value=3
exposure_absolute (int) : min=1 max=10000 step=1 default=166 value=249 flags=inactive
exposure_auto_priority (bool) : default=0 value=1
led1_mode (menu) : min=0 max=3 default=3 value=3
led1_frequency (int) : min=0 max=131 step=1 default=0 value=0
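One observation on the output above: the camera is delivering raw YUYV at 1280x720, and the driver reports only 5 fps ("Frames per second: 5.000"). USB 2.0 does not have the bandwidth for uncompressed 720p at 30 fps, so UVC cameras like the C310 typically reach 30 fps only in MJPEG mode. A hedged suggestion, to be verified against the formats and frame rates your device actually lists:
v4l2-ctl --device=/dev/video0 --list-formats-ext
sudo ffmpeg -f v4l2 -input_format mjpeg -framerate 30 -video_size 1280x720 -i /dev/video0 output.wmv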