
Media (1)
-
The Slip - Artworks
26 September 2011
Updated: September 2011
Language: English
Type: Text
Other articles (62)
-
Configuring language support
15 November 2010
Accessing the configuration and adding supported languages
In order to add support for new languages, you need to go to the "Administer" section of the site.
From there, in the navigation menu, you can reach a "Language management" section that lets you enable support for new languages.
Each newly added language can still be disabled as long as no object has been created in that language. Once one has, it becomes greyed out in the configuration and (...) -
Requesting the creation of a channel
12 March 2010
Depending on how the platform is configured, the user may have two different ways to request the creation of a channel. The first is at registration time; the second, after registration, by filling in a request form.
Both methods ask for the same information and work in much the same way: the future user fills in a series of form fields whose first purpose is to give the administrators information about (...) -
The SPIPmotion queue
28 November 2010
A queue stored in the database
When it is installed, SPIPmotion creates a new table in the database named spip_spipmotion_attentes.
This new table consists of the following fields: id_spipmotion_attente, the unique numeric identifier of the task to process; id_document, the numeric identifier of the original document to encode; id_objet, the unique identifier of the object to which the encoded document will be attached automatically; objet, the type of object to which (...)
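A minimal sketch of what one row of that queue might look like, with only the fields named in the excerpt; the C++ types are assumptions for illustration, not taken from SPIPmotion's source:

#include <string>

// Assumed shape of one spip_spipmotion_attentes row, per the excerpt above.
struct SpipmotionAttente
{
    int         id_spipmotion_attente; // unique numeric id of the task to process
    int         id_document;           // numeric id of the original document to encode
    int         id_objet;              // unique id of the object the result is attached to
    std::string objet;                 // type of that object (the excerpt truncates here)
};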
On other sites (6456)
-
How to detect the real orientation of a video recorded by mobile with "auto rotate" disabled
28 October 2022, by FlamingMoe
When you record a video, the rotation metadata has 4 possible values (0, 90, 180 and 270), and it stores how the device was held when the recording started.


But mobile phones also have a feature to enable or disable screen "auto rotation".


Nowadays it is very common to have that feature disabled, and many people record videos holding the phone horizontally; but since "auto rotation" is disabled, the metadata says the video was recorded vertically.


How to handle this?
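Two separate problems hide in this question: reading the tag, and knowing when the tag lies. The tag itself can be read as in the sketch below, which assumes an FFmpeg version that still exposes stream side data through av_stream_get_side_data() (newer releases move it into codecpar->coded_side_data). When auto-rotation was off, though, the tag is simply wrong, and nothing else in the file records the true orientation; detecting it would take content analysis (for example, checking which way faces or text are upright), which FFmpeg does not do for you.

extern "C"
{
#include <libavformat/avformat.h>
#include <libavutil/display.h>
}
#include <cstdint>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::fprintf(stderr, "usage: %s <video>\n", argv[0]);
        return 1;
    }
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, argv[1], nullptr, nullptr) < 0)
        return 1;
    avformat_find_stream_info(fmt, nullptr);
    for (unsigned i = 0; i < fmt->nb_streams; ++i)
    {
        AVStream* st = fmt->streams[i];
        if (st->codecpar->codec_type != AVMEDIA_TYPE_VIDEO)
            continue;
        // The rotation tag is stored as a 3x3 display matrix in stream side data.
        const uint8_t* sd = av_stream_get_side_data(st, AV_PKT_DATA_DISPLAYMATRIX, nullptr);
        if (sd)
        {
            // av_display_rotation_get() turns the matrix back into degrees
            // (counter-clockwise); phone footage yields 0, 90, 180 or 270.
            double rot = av_display_rotation_get(reinterpret_cast<const int32_t*>(sd));
            std::printf("video stream %u: rotation tag = %.0f degrees\n", i, rot);
        }
        else
        {
            std::printf("video stream %u: no display-matrix side data\n", i);
        }
    }
    avformat_close_input(&fmt);
    return 0;
}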


-
How to send written frames in real time/synchronized with FFmpeg and UDP?
20 June 2018, by potu1304
I wanted to stream my Unity game more or less live with FFmpeg to a simple client. I have one Unity game in which each frame is saved as a jpg image. These images are wrapped by ffmpeg and sent over UDP to a simple C# client, where I use ffplay to play the stream. The problem is that FFmpeg wraps the images far faster than the Unity app can write them, so ffmpeg quits while Unity is still writing frames. Is there a way to make ffmpeg loop and wait for the next image, or can I somehow write a loop that does not invoke ffmpeg once per image? (A sketch of the usual stdin-pipe approach follows after the code below.)
Here is the function from my capturing script in Unity:
Process process;
//BinaryWriter _stdin;
public void encodeFrame()
{
 ProcessStartInfo info = new ProcessStartInfo();
 var basePath = Application.streamingAssetsPath + "/FFmpegOut/Windows/ffmpeg.exe";
 info.FileName = basePath; // the original snippet never assigned the executable path
 // -re paces input at the native frame rate; with a finite image sequence
 // (screen_%d.jpg) ffmpeg exits as soon as the next numbered file is missing.
 info.Arguments = "-re -i screen_%d.jpg -vcodec libx264 -r 24 -f mpegts udp://127.0.0.1:1100";
 info.RedirectStandardOutput = true;
 info.RedirectStandardInput = true;
 info.RedirectStandardError = true;
 info.CreateNoWindow = true;
 info.UseShellExecute = false;
 UnityEngine.Debug.Log(string.Format(
 "Executing \"{0}\" with arguments \"{1}\".\r\n",
 info.FileName,
 info.Arguments));
 process = Process.Start(info);
 //_stdin = new BinaryWriter(process.StandardInput.BaseStream);
 process.WaitForExit();
 var outputReader = process.StandardError;
 string Error = outputReader.ReadToEnd();
 UnityEngine.Debug.Log(Error);
}

And here is the function from the cs file of my simple Windows Forms application:
private void xxxFFplay()
{
text = "start";
byte[] send_buffer = Encoding.ASCII.GetBytes(text);
sock.SendTo(send_buffer, endPoint);
ffplay.StartInfo.FileName = "ffplay.exe";
ffplay.StartInfo.Arguments = "udp://127.0.0.1:1100";
ffplay.StartInfo.CreateNoWindow = true;
ffplay.StartInfo.RedirectStandardOutput = true;
ffplay.StartInfo.UseShellExecute = false;
ffplay.EnableRaisingEvents = true;
ffplay.OutputDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
ffplay.ErrorDataReceived += (o, e) => Debug.WriteLine(e.Data ?? "NULL", "ffplay");
ffplay.Exited += (o, e) => Debug.WriteLine("Exited", "ffplay");
ffplay.Start();
Thread.Sleep(500); // you need to wait/check the process started, then...
// child, new parent
// make 'this' the parent of ffmpeg (presuming you are in scope of a Form or Control)
//SetParent(ffplay.MainWindowHandle, this.panel1.Handle);
// window, x, y, width, height, repaint
// move the ffplayer window to the top-left corner and set the size to 320x280
//MoveWindow(ffplay.MainWindowHandle, -5, -300, 320, 280, true);
}

Does somebody have some ideas? I am really stuck on getting a somewhat "live" stream working.
Best regards
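One way out, sketched here rather than taken from a tested project: stop handing ffmpeg a finite list of screen_%d.jpg files and instead pipe every frame into its standard input with -f image2pipe. ffmpeg then blocks on the pipe waiting for the next frame instead of exiting when the image sequence runs out. The sketch is C++ using popen() (Windows: _popen) for brevity; in the Unity script the equivalent is the commented-out BinaryWriter over process.StandardInput.BaseStream. renderNextFrameAsJpeg() is a placeholder for the game's real frame source.

#include <cstdio>
#include <vector>

// Placeholder frame source: returns one JPEG-encoded frame, empty when done.
static std::vector<unsigned char> renderNextFrameAsJpeg() { return {}; }

int main()
{
    // image2pipe makes ffmpeg read frames from stdin ("-i -"); it waits on
    // the pipe instead of quitting when no more numbered files exist.
    FILE* ff = popen("ffmpeg -f image2pipe -framerate 24 -i - "
                     "-vcodec libx264 -f mpegts udp://127.0.0.1:1100", "w");
    if (!ff)
        return 1;
    for (;;)
    {
        std::vector<unsigned char> jpeg = renderNextFrameAsJpeg();
        if (jpeg.empty())
            break;                                 // game finished rendering
        fwrite(jpeg.data(), 1, jpeg.size(), ff);   // ffmpeg consumes as frames arrive
        fflush(ff);
    }
    pclose(ff); // closing stdin flushes the encoder and lets ffmpeg exit cleanly
    return 0;
}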
-
Display real time frames from several RTSP streams
13 February 2024, by Mrax
I have this class; it uses the FFmpeg library for RTSP live streaming:


#include <iostream>
#include <string>
#include <vector>
#include <mutex>

extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavformat/avio.h>
}

class ryMediaSource
{
public:
 ryMediaSource() {}
 ryMediaSource(const ryMediaSource& other);
 ~ryMediaSource();

 bool ryOpenMediaSource(const std::string&);

private:
 mediaSource pMediaSource;
 AVFormatContext* pFormatCtx;
 mutable std::mutex pMutex;
};


And inside my main file, I have this vector of ryMediaSource and four RTSP URLs:


std::vector<ryMediaSource> mediaSources;
std::vector<std::string> streams =
{
 {"rtsp://1"},
 {"rtsp://2"},
 {"rtsp://3"},
 {"rtsp://4"},
};


Creating an instance for each stream:


for (const auto& stream : streams)
{
 mediaSources.emplace_back(); // Create a new instance for each stream
}



And opening all the streams (I need to have access to all the streams, all the time).


for (size_t s = 0; s < streams.size(); s++)
{
 mediaSources[s].ryOpenMediaSource(streams[s]);
}



After all the streams are loaded, I start displaying the video of all of them via av_read_frame(pFormatCtx, pPacket).
But there is a gap between what is displayed and what the sources (IP cameras) are actually capturing.
For ryOpenMediaSource(streams[0]) it is about 11 seconds, for ryOpenMediaSource(streams[1]) about 7 seconds, for ryOpenMediaSource(streams[2]) about 4 seconds, and ryOpenMediaSource(streams[3]) is real time.
I realized that the issue is in my ryOpenMediaSource code:


bool ryMediaSource::ryOpenMediaSource(const std::string& url)
{
 int rc = -1;

 pFormatCtx = avformat_alloc_context();
 if (!pFormatCtx)
 throw std::runtime_error("Failed to allocate AVFormatContext.");
 rc = avformat_open_input(&pFormatCtx, url.c_str(), NULL, NULL);
 if (rc < 0)
 {
 return false;
 }

 return true; // the original snippet fell off the end without returning a value
}



My question is: why is this happening? Why can't all the streams have the same (timestamp?) as the last one inserted into my vector of ryMediaSource?


Should I overwrite some field of pFormatCtx to "force" the whole vector to have the same (timestamp?) as the last one? If so, can you give me some guidance?


I tried setting some different values on pFormatCtx after loading it with avformat_open_input(&pFormatCtx, url.c_str(), NULL, &pDicts), but no luck at all.
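For reference, a sketch of the buffering options that are usually lowered for live RTSP viewing; all of them are standard libavformat/RTSP demuxer options, but the values here are illustrative and untuned, and they only reduce what libavformat queues at open time rather than fixing the lock-step reading in the display loop (see the threading sketch after the MRE):

static AVFormatContext* openLowLatency(const std::string& url)
{
    AVDictionary* opts = nullptr;
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);      // as in the code above
    av_dict_set(&opts, "fflags", "nobuffer", 0);         // don't pre-buffer input
    av_dict_set(&opts, "max_delay", "500000", 0);        // cap demuxer delay (us)
    av_dict_set(&opts, "probesize", "500000", 0);        // probe less before starting
    av_dict_set(&opts, "analyzeduration", "500000", 0);  // analyze less (us)

    AVFormatContext* ctx = avformat_alloc_context();
    int rc = avformat_open_input(&ctx, url.c_str(), nullptr, &opts);
    av_dict_free(&opts);
    if (rc < 0)
        return nullptr; // avformat_open_input frees ctx on failure
    ctx->flags |= AVFMT_FLAG_NOBUFFER; // same idea as fflags=nobuffer
    return ctx;
}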


I expect all the streams to start at the same time, even if that means pre-loading them, so that later on I can turn these frames into a cv::Mat for rendering.


MRE:


Header:

#pragma once

#include <iostream>
#include <string>
#include <vector>
#include <chrono>
#include <thread>
#include <mutex>


extern "C"
{
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/pixdesc.h>
#include <libavutil/hwcontext.h>
#include <libavutil/opt.h>
#include <libavutil/avassert.h>
#include <libavutil/imgutils.h>
#include <libswscale/swscale.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avio.h>
#include <libavutil/time.h>
}

class ryMediaSource
{
public:
 ryMediaSource() {}
 ryMediaSource(const ryMediaSource& other);
 ~ryMediaSource();

 struct mediaSourceParams
 {
 int sx;
 int sy;
 int lsize;
 double fps;
 unsigned char* frame;
 };

 bool ryOpenMediaSource(const std::string&);
 mediaSourceParams ryGetMediaSourceFrame();
 void ryCloseMediaSource();

private:
 mediaSource pMediaSource;
 AVFormatContext* pFormatCtx;
 AVCodecContext* pCodecCtx;
 AVFrame* pFrame;
 SwsContext* pSwsCtx;
 AVPacket* pPacket;
 int pVideoStream;
 uint8_t* pBuffer;
 AVFrame* pPict;
 double pFPS;
 mutable std::mutex pMutex;
};

C++ source code:

#include "ryMediaSource.hpp"

ryMediaSource::ryMediaSource(const ryMediaSource& other)
:pFormatCtx(nullptr), 
pCodecCtx(nullptr), 
pFrame(nullptr), 
pSwsCtx(nullptr), 
pPacket(nullptr), 
pBuffer(nullptr), 
pPict(nullptr)
{
 std::lock_guard lock(other.pMutex);
 av_log_set_level(0);
 avformat_network_init();
}

bool ryMediaSource::ryOpenMediaSource(const std::string& url)
{
 int rc = -1;

 try
 {
 AVDictionary* pDicts = nullptr;

 pFormatCtx = avformat_alloc_context();
 if (!pFormatCtx)
 throw std::runtime_error("Failed to allocate AVFormatContext.");
 rc = av_dict_set(&pDicts, "rtsp_transport", "tcp", 0);
 if (rc < 0)
 throw std::runtime_error("av_dict_set failed.");
 rc = avformat_open_input(&pFormatCtx, url.c_str(), NULL, &pDicts);
 if (rc < 0)
 {
 av_dict_free(&pDicts); // Free the dictionary in case of an error
 throw std::runtime_error("Could not open source.");
 }
 }
 catch (const std::exception& e)
 {
 std::cerr << "Exception: " << e.what() << std::endl;
 return false;
 }

 try
 {
 rc = avformat_find_stream_info(pFormatCtx, NULL);
 if (rc < 0)
 {
 throw std::runtime_error("Could not find stream information.");
 }
 pVideoStream = -1;
 for (size_t v = 0; v < pFormatCtx->nb_streams; ++v)
 {
 if (pFormatCtx->streams[v]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO)
 {
 pVideoStream = static_cast<int>(v);
 AVRational rational = pFormatCtx->streams[pVideoStream]->avg_frame_rate;
 pFPS = 1.0 / ((double)rational.num / (double)(rational.den));
 break;
 }
 }
 if (pVideoStream < 0)
 {
 throw std::runtime_error("Could not find video stream.");
 }

 const AVCodec* pCodec = avcodec_find_decoder(pFormatCtx->streams[pVideoStream]->codecpar->codec_id);
 if (!pCodec)
 {
 throw std::runtime_error("Unsupported codec!");
 }
 pCodecCtx = avcodec_alloc_context3(pCodec);
 if (!pCodecCtx)
 {
 throw std::runtime_error("Failed to allocate AVCodecContext.");
 }
 rc = avcodec_parameters_to_context(pCodecCtx, pFormatCtx->streams[pVideoStream]->codecpar);
 if (rc != 0)
 {
 throw std::runtime_error("Could not copy codec context.");
 }
 rc = avcodec_open2(pCodecCtx, pCodec, NULL);
 if (rc < 0)
 {
 throw std::runtime_error("Could not open codec.");
 }
 pFrame = av_frame_alloc();
 if (!pFrame)
 {
 throw std::runtime_error("Could not allocate frame.");
 }
 pSwsCtx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt, pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_BGR24, SWS_BILINEAR, NULL, NULL, NULL);
 if (!pSwsCtx)
 {
 throw std::runtime_error("Failed to allocate SwsContext.");
 }
 pPacket = av_packet_alloc();
 if (!pPacket)
 {
 throw std::runtime_error("Could not allocate AVPacket.");
 }
 pBuffer = (uint8_t*)av_malloc(av_image_get_buffer_size(AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height, 1));
 if (!pBuffer)
 {
 throw std::runtime_error("Could not allocate buffer.");
 }
 pPict = av_frame_alloc();
 if (!pPict)
 {
 throw std::runtime_error("Could not allocate frame.");
 }
 av_image_fill_arrays(pPict->data, pPict->linesize, pBuffer, AV_PIX_FMT_BGR24, pCodecCtx->width, pCodecCtx->height, 1);
 }
 catch (const std::exception& e)
 {
 std::cerr << "Exception: " << e.what() << std::endl;
 return false;
 }

 return true;
}

ryMediaSource::mediaSourceParams ryMediaSource::ryGetMediaSourceFrame()
{
 mediaSourceParams msp = { 0, 0, 0, 0.0, nullptr };
 char errbuf[AV_ERROR_MAX_STRING_SIZE];

 std::lock_guard lock(pMutex);
 if (av_read_frame(pFormatCtx, pPacket) >= 0)
 {
 if (pPacket->stream_index == pVideoStream)
 {
 int ret = avcodec_send_packet(pCodecCtx, pPacket);
 if (ret < 0)
 {
 av_strerror(ret, errbuf, sizeof(errbuf));
 std::cerr << "Error sending packet for avcodec_send_packet: " << errbuf << std::endl;
 avcodec_flush_buffers(pCodecCtx);
 // Handle specific error cases
 if (ret == AVERROR(EAGAIN))
 {
 std::cerr << "EAGAIN indicates that output must be read with avcodec_receive_frame first" << std::endl;
 }
 else if (ret == AVERROR_EOF)
 {
 std::cerr << "AVERROR_EOF indicates that the decoder has been fully flushed" << std::endl;
 }
 }
 ret = avcodec_receive_frame(pCodecCtx, pFrame);
 if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
 {
 av_strerror(ret, errbuf, sizeof(errbuf));
 std::cerr << "Error receiving frame for avcodec_receive_frame: " << errbuf << std::endl;
 // EAGAIN just means the decoder needs more packets before it can emit a frame.
 av_packet_unref(pPacket); // don't leak the packet on this early return
 return msp;
 }
 else if (ret < 0)
 {
 av_strerror(ret, errbuf, sizeof(errbuf));
 std::cerr << "Error receiving frame for avcodec_receive_frame: " << errbuf << std::endl;
 av_packet_unref(pPacket);
 return msp; // a failed decode must not fall through to sws_scale below
 }
 // Move memory allocation outside the loop if frame size is constant
 size_t bufferSize = static_cast<size_t>(pPict->linesize[0]) * pCodecCtx->height;
 msp.frame = new unsigned char[bufferSize];
 msp.lsize = pPict->linesize[0];
 msp.sx = pCodecCtx->width;
 msp.sy = pCodecCtx->height;
 msp.fps = pFPS;
 sws_scale(pSwsCtx, (uint8_t const* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pPict->data, pPict->linesize);
 std::memcpy(msp.frame, pBuffer, bufferSize);
 //delete[] msp.frame;
 }

 // Unref packet for non-video streams
 av_packet_unref(pPacket);
 }

 return msp;
}

main.cpp

std::vector<std::string> streams =
{
 {"rtsp://1"},
 {"rtsp://2"},
 {"rtsp://3"},
 {"rtsp://4"},
};

std::vector<ryMediaSource> mediaSources;

int main()
{
 int key = 0;
 int channel = 0;
 std::vector<cv::Mat> streamFrame(streams.size());
 ryMediaSource::mediaSourceParams msp = { 0, 0, 0, 0.0, nullptr };

 for (const auto& stream : streams)
 {
 mediaSources.emplace_back(); // Create a new instance for each stream
 }
 for (size_t s = 0; s < streams.size(); s++)
 {
 try
 {
 mediaSources[s].ryOpenMediaSource(streams[s]);
 }
 catch (const std::exception& e)
 {
 std::cerr << "Error initializing stream " << s << ": " << e.what() << std::endl;
 }
 }

 cv::namedWindow("ryInferenceServer", cv::WINDOW_FREERATIO);
 cv::resizeWindow("ryInferenceServer", 640, 480);
 cv::moveWindow("ryInferenceServer", 0, 0);
 for (;;)
 {
 for (size_t st = 0; st < mediaSources.size(); ++st)
 {
 msp = mediaSources[st].ryGetMediaSourceFrame();
 if (msp.frame != nullptr)
 {
 cv::Mat preview;
 cv::Mat frame(msp.sy, msp.sx, CV_8UC3, msp.frame, msp.lsize);
 cv::resize(frame, preview, cv::Size(640, 480));
 if (!frame.empty())
 {
 try
 {
 streamFrame[st] = frame.clone();
 if (channel == st)
 {
 cv::imshow("ryInferenceServer", preview);
 key = cv::waitKeyEx(1);
 if (key == LEFT_KEY)
 {
 channel--;
 if (channel < 0)
 channel = 0;
 }
 if (key == RIGHT_KEY)
 {
 channel++;
 if (channel >= mediaSources.size())
 channel = mediaSources.size() - 1;
 }
 if (key == 27)
 break;
 }
 streamFrame[st].release();
 delete[] msp.frame;
 }
 catch (const std::exception& e)
 {
 std::cerr << "Exception in processing frame for stream " << st << ": " << e.what() << std::endl;
 }
 }
 frame.release();
 }
 }
 }
}
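A closing note on the MRE, hedged rather than verified against the cameras: the lag pattern (the first-opened stream lags the most) is what sequential opening plus lock-step reading would produce. While streams 1 to 3 are still being opened, stream 0's TCP session is already queuing packets, and a loop that shows one frame per stream per iteration never drains that backlog. The usual fix is one capture thread per source that keeps only the newest frame; LatestFrame and captureLoop below are illustrative additions, not part of the original class.

#include <atomic>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>

// Newest decoded frame of one stream; older frames are simply dropped.
struct LatestFrame
{
    std::mutex m;
    std::shared_ptr<std::vector<unsigned char>> pixels; // BGR24, lsize * sy bytes
};

// One of these runs per ryMediaSource, each on its own std::thread, draining
// av_read_frame() as fast as the camera delivers instead of at display rate.
void captureLoop(ryMediaSource& src, LatestFrame& latest, std::atomic<bool>& run)
{
    while (run)
    {
        auto msp = src.ryGetMediaSourceFrame(); // blocking read + decode
        if (!msp.frame)
            continue;
        auto buf = std::make_shared<std::vector<unsigned char>>(
            msp.frame, msp.frame + static_cast<size_t>(msp.lsize) * msp.sy);
        delete[] msp.frame;
        std::lock_guard<std::mutex> lock(latest.m);
        latest.pixels = std::move(buf); // replace, never queue
    }
}

// Usage: start one std::thread(captureLoop, std::ref(mediaSources[i]),
// std::ref(latest[i]), std::ref(running)) per stream; the imshow loop then
// only locks latest[i].m and copies latest[i].pixels into a cv::Mat.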