
Other articles (111)
-
Creating farms of unique websites
13 April 2011
MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
This allows (among other things): implementation costs to be shared between several different projects / individuals; rapid deployment of multiple unique sites; creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...) -
Customising categories
21 June 2013
Category creation form
For those who know SPIP well, a category can be likened to a rubrique (section).
For a document of type category, the fields offered by default are: Texte
This form can be modified under:
Administration > Configuration des masques de formulaire.
For a document of type media, the fields not displayed by default are: Descriptif rapide
It is also in this configuration section that you can specify the (...) -
Contribute to a better visual interface
13 April 2011
MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.
On other sites (9850)
-
How to display raw H264 stream in C#
10 August 2021, by forDream
I read a raw H264 stream via a socket; how should I display it with as low latency as possible? For example, displaying it in a PictureBox.


I tried to solve this problem with FFmpeg.AutoGen, but I'm new to ffmpeg. Is there another easy way for me to display the raw H264 stream?

I also tried https://github.com/ZeBobo5/Vlc.DotNet, but VLC appears to have a buffer that cannot be disabled, which delays the displayed stream by about 1 to 2 seconds.

public static unsafe void test(Socket mSocket)
{
    ffmpeg.av_register_all();

    var pCodec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
    var pCodecContext = ffmpeg.avcodec_alloc_context3(pCodec);
    var code = ffmpeg.avcodec_open2(pCodecContext, pCodec, null).ThrowExceptionIfError("avcodec_open2");

    var pFrame = ffmpeg.av_frame_alloc();
    var pReceivedFrame = ffmpeg.av_frame_alloc();
    AVPacket* pPacket = ffmpeg.av_packet_alloc();
    var buffer = new byte[4096];

    while (mSocket.Connected)
    {
        ffmpeg.av_frame_unref(pFrame);
        ffmpeg.av_frame_unref(pReceivedFrame);
        ffmpeg.av_packet_unref(pPacket);
        var len = mSocket.Receive(buffer, buffer.Length, SocketFlags.None);
        fixed (byte* s = buffer)
        {
            pPacket->size = len;
            pPacket->data = s;
        } // NOTE: buffer is only pinned inside this block, yet pPacket->data is used after it
        if (len > 0)
        {
            ffmpeg.avcodec_send_packet(pCodecContext, pPacket).ThrowExceptionIfError("avcodec_send_packet"); // <-- throws an exception: Invalid data found when processing input
            ffmpeg.av_packet_unref(pPacket);

            ffmpeg.avcodec_receive_frame(pCodecContext, pFrame).ThrowExceptionIfError("avcodec_receive_frame");

            // some code to display the image in the picture box

            Trace.WriteLine($"width:{pFrame->width},height:{pFrame->height},{pFrame->data}");
        }
    }
    ffmpeg.av_frame_free(&pFrame);
}
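
A plausible cause of the "Invalid data found when processing input" error above: each Socket.Receive returns an arbitrary slice of the byte stream, while avcodec_send_packet expects whole H.264 packets (complete NAL units). The usual remedy is to run the received bytes through FFmpeg's parser first. A minimal sketch of that approach with FFmpeg.AutoGen (pParser is a name introduced here; pCodecContext, pPacket, pFrame and buffer come from the code above; error handling is omitted):

// Sketch: assemble complete H.264 packets with the FFmpeg parser before decoding.
AVCodecParserContext* pParser = ffmpeg.av_parser_init((int)AVCodecID.AV_CODEC_ID_H264);

// Inside the receive loop, after len = mSocket.Receive(...):
fixed (byte* pBuf = buffer)
{
    byte* p = pBuf;
    int remaining = len;
    while (remaining > 0)
    {
        byte* outData = null;
        int outSize = 0;
        // The parser buffers input internally and emits one complete packet at a time.
        int used = ffmpeg.av_parser_parse2(pParser, pCodecContext,
            &outData, &outSize, p, remaining,
            ffmpeg.AV_NOPTS_VALUE, ffmpeg.AV_NOPTS_VALUE, 0);
        p += used;
        remaining -= used;
        if (outSize > 0)
        {
            pPacket->data = outData; // points into the parser's own buffer, valid here
            pPacket->size = outSize;
            ffmpeg.avcodec_send_packet(pCodecContext, pPacket);
            while (ffmpeg.avcodec_receive_frame(pCodecContext, pFrame) == 0)
            {
                // pFrame now holds a decoded picture (typically YUV420P);
                // convert with sws_scale before drawing into the PictureBox.
            }
        }
    }
}

As a side effect, this also avoids the pinning problem in the original loop, where pPacket->data is assigned inside a fixed block but consumed after the block ends.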



-
FFmpeg: unable to open display. No such file or directory on Ubuntu
3 June 2021, by Muaaz Khalid
I'm trying to create a video by merging a couple of images together. This is the command I'm using:


Note: the command works fine on another server; I'm just migrating the software to a new server, where it doesn't work as expected.


ffmpeg -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6726.png -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6725.png -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6724.png -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6723.png -i /tmp/video/1610806386.jpeg -filter_complex "[0:v]trim=start=0:duration=5.0000,setpts=PTS-STARTPTS[v0_t];[1:v]fade=t=in:st=0:d=1[v1]; [2:v]fade=t=in:st=0:d=1[v2]; [3:v]fade=t=in:st=0:d=1[v3]; [v0_t][v1][v2][v3] concat=n=4:v=1:a=0,format=yuv420p[v];[v][4:v]overlay=x=35:y=((main_h-overlay_h)-35)[out_v];amerge=inputs=0 [out_m_tts]" -map "[out_v]" -map "[out_m_tts]" -ac 2 /tmp/video/video.mp4



As mentioned in the title, I get the following error:


Error: unable to open display 
ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
 built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
 configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
 libavutil 56. 51.100 / 56. 51.100
 libavcodec 58. 91.100 / 58. 91.100
 libavformat 58. 45.100 / 58. 45.100
 libavdevice 58. 10.100 / 58. 10.100
 libavfilter 7. 85.100 / 7. 85.100
 libswscale 5. 7.100 / 5. 7.100
 libswresample 3. 7.100 / 3. 7.100
 libpostproc 55. 7.100 / 55. 7.100
/tmp/video/img_6726.png: No such file or directory



I'm running the above command as root, and the file /tmp/video/img_6726.png exists with 777 permissions.




Versions

Ubuntu: 18.04.2 LTS
FFmpeg: n4.3.1

FFmpeg was installed via the following command:
snap install ffmpeg
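
One detail that may explain the missing-file error despite the file existing: an ffmpeg installed from a snap runs confined and gets its own private /tmp, so it cannot see the host's /tmp/video even when run as root. A hedged workaround sketch (paths are illustrative, and it assumes the snap's home interface is connected):

# Keep the inputs outside /tmp, which snap confinement replaces with a private one
mkdir -p ~/video
cp -r /tmp/video/. ~/video/
ffmpeg -loop 1 -t 5.0000 -r 30 -i ~/video/img_6726.png ... ~/video/video.mp4

Installing ffmpeg from the Ubuntu archive (apt install ffmpeg) instead of the snap would avoid the confinement entirely.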


-
Live555 Multicast RTSP Server - Display in FFmpeg
31 May 2021, by BanB
I made a multicast RTSP server using the Live555 library.

It can be displayed in VLC, but not in a VMS (NX Witness) or in FFmpeg.

If I use unicast, I can display it, but I do not know how to change the file that is being streamed in unicast.

Is there a way to change the streamed file in unicast, or to make the multicast stream viewable in FFmpeg?
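
One client-side check worth making before changing the server: FFmpeg's RTSP client does not necessarily choose the multicast transport on its own, and it can be forced with the -rtsp_transport option. A hedged first test, where the URL is whatever rtspServer->rtspURL(sms) prints (shown here with a placeholder address):

ffplay -rtsp_transport udp_multicast rtsp://<server-ip>:8554/live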

This is my code.




Header




#include <iostream>
#include <string>
#include <filesystem>
#include <fstream>
#include <ctime>
// The next three include targets were lost in the original post; these are the
// usual Live555 headers for the classes used below (an assumption).
#include <liveMedia.hh>
#include <BasicUsageEnvironment.hh>
#include <GroupsockHelper.hh>
#include <opencv2/videoio.hpp>
#include <opencv2/imgcodecs.hpp>

void Start(const char* streamPath, int port);
void AddDataL(BYTE* data, int length);
void AddDataHW(BYTE* data, int height, int width);

void Play();
void AfterPlaying(void*);

void EncodeToH264(cv::Mat image);
long long GetTimeMs();
int GetFileCount();
const char* GetFirstFilePath();

const int fourcc = cv::VideoWriter::fourcc('x', '2', '6', '4');

UsageEnvironment* env;
H264ServerMediaSubsession* h264;
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

const char* dataPath = "data/";
const char* extension = ".h264";




CPP




void Start()
{
    TaskScheduler *scheduler = BasicTaskScheduler::createNew();
    env = BasicUsageEnvironment::createNew(*scheduler);

    RTSPServer *rtspServer = RTSPServer::createNew(*env, 8554);

    if (rtspServer == NULL)
        throw;

    struct sockaddr_storage destinationAddress;
    destinationAddress.ss_family = AF_INET;
    ((struct sockaddr_in &)destinationAddress).sin_addr.s_addr = chooseRandomIPv4SSMAddress(*env);

    const unsigned short rtpPortNum = 18888;
    const unsigned short rtcpPortNum = rtpPortNum + 1;
    const unsigned char ttl = 255;

    const Port rtpPort(rtpPortNum);
    const Port rtcpPort(rtcpPortNum);

    Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
    rtpGroupsock.multicastSendOnly();
    Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
    rtcpGroupsock.multicastSendOnly();

    OutPacketBuffer::maxSize = 300000;
    videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

    const unsigned estimatedSessionBandwidth = 5000; // in kbps, for RTCP
    const unsigned maxCNAMElen = 100;
    unsigned char CNAME[maxCNAMElen + 1];
    gethostname((char *)CNAME, maxCNAMElen);
    CNAME[maxCNAMElen] = '\0';

    RTCPInstance *rtcp = RTCPInstance::createNew(*env, &rtcpGroupsock, estimatedSessionBandwidth, CNAME, videoSink, NULL, True);

    ServerMediaSession *sms = ServerMediaSession::createNew(*env, "live", "live", "Session streamed by \"Test Server\"", True);
    sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
    rtspServer->addServerMediaSession(sms);

    *env << "Start > " << rtspServer->rtspURL(sms) << "\n";

    Play();

    env->taskScheduler().doEventLoop(); // does not return
}

void AddData(BYTE* data, int length)
{
    // Decode the incoming image bytes and re-encode them as H.264.
    EncodeToH264(cv::imdecode(cv::Mat(1, length, CV_8UC3, data), 1));
}

void EncodeToH264(cv::Mat image)
{
    // Write each image twice into its own small .h264 file, named by timestamp.
    std::string fileName = dataPath + std::to_string(GetTimeMs()) + extension;
    cv::VideoWriter writer(fileName.c_str(), fourcc, 1, image.size());
    writer.write(image);
    writer.write(image);
    writer.release();
}

void Play()
{
    while (GetFileCount() == 0); // busy-wait until at least one file exists

    const char* fileName = GetFirstFilePath();

    ByteStreamFileSource* fileSource;
    while ((fileSource = ByteStreamFileSource::createNew(*env, fileName)) == NULL);

    FramedSource* videoES = fileSource;

    videoSource = H264VideoStreamFramer::createNew(*env, videoES);

    videoSink->startPlaying(*videoSource, AfterPlaying, videoSink);
}

void AfterPlaying(void*)
{
    videoSink->stopPlaying();
    Medium::close(videoSource);

    // Delete the file just streamed, as long as a newer one exists.
    if (GetFileCount() > 1)
        std::filesystem::remove(GetFirstFilePath());

    Play();
}

long long GetTimeMs()
{
    return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
}

int GetFileCount()
{
    return std::distance(std::filesystem::directory_iterator(dataPath), std::filesystem::directory_iterator());
}

const char* GetFirstFilePath()
{
    // NOTE: entry.path().string() is a temporary; the returned pointer dangles.
    for (const auto& entry : std::filesystem::directory_iterator(dataPath))
        return entry.path().string().c_str();
}
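
A side note on GetFirstFilePath: it returns c_str() of a temporary std::string, so the caller receives a dangling pointer, and the function can also fall off the end without returning. A minimal safer sketch, returning the path by value (GetFirstFilePathSafe is a name introduced here, not part of the original code):

std::string GetFirstFilePathSafe(const std::string& dir)
{
    // The returned std::string owns its storage, so nothing dangles.
    for (const auto& entry : std::filesystem::directory_iterator(dir))
        return entry.path().string();
    return {}; // no files yet
}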