Advanced search

Media (0)

No media matching your criteria is available on the site.

Other articles (111)

  • Creating farms of unique websites

    13 April 2011

    MediaSPIP platforms can be installed as a farm, with a single "core" hosted on a dedicated server and used by multiple websites.
    This allows (among other things): implementation costs to be shared between several different projects or individuals, rapid deployment of multiple unique sites, and the creation of groups of like-minded sites, making it possible to browse media in a more controlled and selective environment than the major "open" (...)

  • Customizing categories

    21 June 2013

    Category creation form
    For those who know SPIP well, a category can be thought of as a section (rubrique).
    For a document of the category type, the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration des masques de formulaire.
    For a document of the media type, the fields not displayed by default are: Short description
    It is also in this configuration section that you can specify the (...)

  • Contribute to a better visual interface

    13 April 2011

    MediaSPIP is based on a system of themes and templates. Templates define the placement of information on the page, and can be adapted to a wide range of uses. Themes define the overall graphic appearance of the site.
    Anyone can submit a new graphic theme or template and make it available to the MediaSPIP community.

On other sites (9850)

  • How to display raw H264 stream in C#

    10 August 2021, by forDream

    I read a raw H264 stream via a socket. How should I display it with as low a latency as possible, for example in a PictureBox?

    I tried to solve this with FFmpeg.AutoGen, but I'm new to ffmpeg. Or is there another easy way to display the raw H264 stream?

    I also tried https://github.com/ZeBobo5/Vlc.DotNet, but VLC seems to have a buffer that cannot be disabled, which delays the displayed stream by about 1 to 2 seconds.

    public static unsafe void test(Socket mSocket)
    {
        ffmpeg.av_register_all();

        var pCodec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
        var pCodecContext = ffmpeg.avcodec_alloc_context3(pCodec);
        var code = ffmpeg.avcodec_open2(pCodecContext, pCodec, null).ThrowExceptionIfError("avcodec_open2");

        var pFrame = ffmpeg.av_frame_alloc();
        var pReceivedFrame = ffmpeg.av_frame_alloc();
        AVPacket* pPacket = ffmpeg.av_packet_alloc();
        var buffer = new byte[4096];

        while (mSocket.Connected)
        {
            ffmpeg.av_frame_unref(pFrame);
            ffmpeg.av_frame_unref(pReceivedFrame);
            ffmpeg.av_packet_unref(pPacket);
            var len = mSocket.Receive(buffer, buffer.Length, SocketFlags.None);
            fixed (byte* s = buffer)
            {
                // NOTE: buffer is only pinned inside this fixed block,
                // so pPacket->data dangles as soon as it exits
                pPacket->size = len;
                pPacket->data = s;
            }
            if (len > 0)
            {
                ffmpeg.avcodec_send_packet(pCodecContext, pPacket).ThrowExceptionIfError("avcodec_send_packet"); // <-- throws an exception: Invalid data found when processing input
                ffmpeg.av_packet_unref(pPacket);

                ffmpeg.avcodec_receive_frame(pCodecContext, pFrame).ThrowExceptionIfError("avcodec_receive_frame");

                // some code to display the image to picture box

                Trace.WriteLine($"width:{pFrame->width},height:{pFrame->height},{pFrame->data}");
            }
        }
        ffmpeg.av_frame_free(&pFrame);
    }
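
    The "Invalid data found when processing input" error is consistent with feeding the decoder arbitrary socket-sized chunks: avcodec_send_packet expects complete packets (whole NAL units), while Receive returns whatever bytes happen to be available. Below is a minimal sketch of one common fix, assuming the FFmpeg.AutoGen bindings and reusing the asker's ThrowExceptionIfError helper: run the bytes through av_parser_parse2 so the decoder only ever sees complete packets. It also keeps the buffer pinned while the packet is in use.

    public static unsafe void Decode(Socket socket)
    {
        var pCodec = ffmpeg.avcodec_find_decoder(AVCodecID.AV_CODEC_ID_H264);
        var pCodecContext = ffmpeg.avcodec_alloc_context3(pCodec);
        ffmpeg.avcodec_open2(pCodecContext, pCodec, null).ThrowExceptionIfError("avcodec_open2");

        // the parser reassembles the raw byte stream into complete packets
        var pParser = ffmpeg.av_parser_init((int)AVCodecID.AV_CODEC_ID_H264);
        var pPacket = ffmpeg.av_packet_alloc();
        var pFrame = ffmpeg.av_frame_alloc();
        var buffer = new byte[4096];

        while (socket.Connected)
        {
            var len = socket.Receive(buffer, buffer.Length, SocketFlags.None);
            if (len <= 0) break;

            fixed (byte* pBuf = buffer)   // stay pinned while the packet is in use
            {
                var offset = 0;
                while (offset < len)
                {
                    // consumes some input bytes; fills pPacket->data/size only
                    // once a complete packet has been assembled
                    var used = ffmpeg.av_parser_parse2(pParser, pCodecContext,
                        &pPacket->data, &pPacket->size,
                        pBuf + offset, len - offset,
                        ffmpeg.AV_NOPTS_VALUE, ffmpeg.AV_NOPTS_VALUE, 0);
                    offset += used;

                    if (pPacket->size == 0)
                        continue; // parser needs more bytes

                    ffmpeg.avcodec_send_packet(pCodecContext, pPacket)
                        .ThrowExceptionIfError("avcodec_send_packet");

                    // drain every frame the decoder has ready; a non-zero
                    // return (e.g. EAGAIN) just means it wants more input
                    while (ffmpeg.avcodec_receive_frame(pCodecContext, pFrame) == 0)
                    {
                        Trace.WriteLine($"decoded {pFrame->width}x{pFrame->height}");
                        // convert pFrame (e.g. via sws_scale) to a Bitmap and
                        // hand it to the PictureBox here
                    }
                }
            }
        }

        ffmpeg.av_frame_free(&pFrame);
        ffmpeg.av_packet_free(&pPacket);
        ffmpeg.av_parser_close(pParser);
        ffmpeg.avcodec_free_context(&pCodecContext);
    }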

  • FFMPEG: unable to open display. No such file or directory on Ubuntu

    3 June 2021, by Muaaz Khalid

    I'm trying to create a video by merging a couple of images together. This is the command I'm trying:

    Note: the command works fine on another server; I'm just migrating the software to the new server, where it doesn't work as expected.

    ffmpeg -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6726.png \
           -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6725.png \
           -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6724.png \
           -loop 1 -t 5.0000 -r 30 -i /tmp/video/img_6723.png \
           -i /tmp/video/1610806386.jpeg \
           -filter_complex "[0:v]trim=start=0:duration=5.0000,setpts=PTS-STARTPTS[v0_t];
                            [1:v]fade=t=in:st=0:d=1[v1];
                            [2:v]fade=t=in:st=0:d=1[v2];
                            [3:v]fade=t=in:st=0:d=1[v3];
                            [v0_t][v1][v2][v3] concat=n=4:v=1:a=0,format=yuv420p[v];
                            [v][4:v]overlay=x=35:y=((main_h-overlay_h)-35)[out_v];
                            amerge=inputs=0 [out_m_tts]" \
           -map "[out_v]" -map "[out_m_tts]" -ac 2 /tmp/video/video.mp4

    As mentioned in the title, I get the following error:

    Error: unable to open display
    ffmpeg version n4.3.1 Copyright (c) 2000-2020 the FFmpeg developers
      built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
      configuration: --prefix= --prefix=/usr --disable-debug --disable-doc --disable-static --enable-cuda --enable-cuda-sdk --enable-cuvid --enable-libdrm --enable-ffplay --enable-gnutls --enable-gpl --enable-libass --enable-libfdk-aac --enable-libfontconfig --enable-libfreetype --enable-libmp3lame --enable-libnpp --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libopus --enable-libpulse --enable-sdl2 --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libv4l2 --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxcb --enable-libxvid --enable-nonfree --enable-nvenc --enable-omx --enable-openal --enable-opencl --enable-runtime-cpudetect --enable-shared --enable-vaapi --enable-vdpau --enable-version3 --enable-xlib
      libavutil      56. 51.100 / 56. 51.100
      libavcodec     58. 91.100 / 58. 91.100
      libavformat    58. 45.100 / 58. 45.100
      libavdevice    58. 10.100 / 58. 10.100
      libavfilter     7. 85.100 /  7. 85.100
      libswscale      5.  7.100 /  5.  7.100
      libswresample   3.  7.100 /  3.  7.100
      libpostproc    55.  7.100 / 55.  7.100
    /tmp/video/img_6726.png: No such file or directory

    I'm running the above command as root, and the file /tmp/video/img_6726.png exists with 777 permissions.

    Versions

    Ubuntu: 18.04.2 LTS
    FFMPEG: n4.3.1
    FFMPEG was installed via snap install ffmpeg
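
    A plausible cause, given the snap install: strictly confined snap packages see a private /tmp, so /tmp/video/img_6726.png on the host would be invisible to a snap-packaged ffmpeg regardless of its permissions. A hedged workaround (paths are illustrative) is to keep the inputs outside /tmp, or to use a non-snap build:

    mv /tmp/video ~/video
    sudo snap remove ffmpeg && sudo apt install ffmpeg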

  • Live555 Multicast RTSP Server - Display in FFMPEG

    31 May 2021, by BanB

    I made a multicast RTSP server using the Live555 library. It is displayed by VLC, but not by VMS (NX Witness) or FFMPEG. If I use unicast, I can display it, but I do not know how to change the file that is being streamed. Is there a way to change the streaming file with unicast, or to make it viewable in FFMPEG with multicast?

    This is my code.

    Header

    #include <iostream>
    #include <string>
    #include <filesystem>
    #include <fstream>
    #include <ctime>
    // the next three includes were stripped by the site's HTML filter;
    // presumably the Live555 headers, e.g.:
    #include <liveMedia.hh>
    #include <BasicUsageEnvironment.hh>
    #include <GroupsockHelper.hh>
    #include <opencv2/videoio.hpp>
    #include <opencv2/imgcodecs.hpp>

    void Start(const char* streamPath, int port);
    void AddDataL(BYTE* data, int length);
    void AddDataHW(BYTE* data, int height, int width);

    void Play();
    void AfterPlaying(void*);

    void EncodeToH264(cv::Mat image);
    long long GetTimeMs();
    int GetFileCount();
    const char* GetFirstFilePath();

    const int fourcc = cv::VideoWriter::fourcc('x', '2', '6', '4');

    UsageEnvironment* env;
    H264ServerMediaSubsession* h264;
    H264VideoStreamFramer* videoSource;
    RTPSink* videoSink;

    const char* dataPath = "data/";
    const char* extension = ".h264";

    CPP

    void Start()
    {
        TaskScheduler *scheduler = BasicTaskScheduler::createNew(); // "askScheduler" in the original post; clearly TaskScheduler
        env = BasicUsageEnvironment::createNew(*scheduler);

        RTSPServer *rtspServer = RTSPServer::createNew(*env, 8554);

        if (rtspServer == NULL)
            throw;

        struct sockaddr_storage destinationAddress;
        destinationAddress.ss_family = AF_INET;
        ((struct sockaddr_in &)destinationAddress).sin_addr.s_addr = chooseRandomIPv4SSMAddress(*env);

        const unsigned short rtpPortNum = 18888;
        const unsigned short rtcpPortNum = rtpPortNum + 1;
        const unsigned char ttl = 255;

        const Port rtpPort(rtpPortNum);
        const Port rtcpPort(rtcpPortNum);

        Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
        rtpGroupsock.multicastSendOnly();
        Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
        rtcpGroupsock.multicastSendOnly();

        OutPacketBuffer::maxSize = 300000;
        videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96);

        const unsigned estimatedSessionBandwidth = 5000;
        const unsigned maxCNAMElen = 100;
        unsigned char CNAME[maxCNAMElen + 1];
        gethostname((char *)CNAME, maxCNAMElen);
        CNAME[maxCNAMElen] = '\0';

        RTCPInstance *rtcp = RTCPInstance::createNew(*env, &rtcpGroupsock, estimatedSessionBandwidth, CNAME, videoSink, NULL, True);

        ServerMediaSession *sms = ServerMediaSession::createNew(*env, "live", "live", "Session streamed by \"Test Server\"", True);
        sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
        rtspServer->addServerMediaSession(sms);

        *env << "Start > " << rtspServer->rtspURL(sms) << "\n";

        Play();

        env->taskScheduler().doEventLoop();
    }

    void AddData(BYTE* data, int length)
    {
        EncodeToH264(cv::imdecode(cv::Mat(1, length, CV_8UC3, data), 1));
    }

    void EncodeToH264(cv::Mat image)
    {
        std::string fileName = dataPath + std::to_string(GetTimeMs()) + extension;
        cv::VideoWriter writer(fileName.c_str(), fourcc, 1, image.size());
        writer.write(image);
        writer.write(image);
        writer.release();
    }

    void Play()
    {
        while (GetFileCount() == 0);

        const char* fileName = GetFirstFilePath();

        ByteStreamFileSource* fileSource;
        while ((fileSource = ByteStreamFileSource::createNew(*env, fileName)) == NULL);

        FramedSource* videoES = fileSource;

        videoSource = H264VideoStreamFramer::createNew(*env, videoES);

        videoSink->startPlaying(*videoSource, AfterPlaying, videoSink);
    }

    void AfterPlaying(void*)
    {
        videoSink->stopPlaying();
        Medium::close(videoSource);

        if (GetFileCount() > 1)
            std::filesystem::remove(GetFirstFilePath());

        Play();
    }

    long long GetTimeMs()
    {
        // template argument was stripped by the site's HTML filter;
        // milliseconds inferred from the function name
        return std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch()).count();
    }

    int GetFileCount()
    {
        return std::distance(std::filesystem::directory_iterator(dataPath), std::filesystem::directory_iterator());
    }

    const char* GetFirstFilePath()
    {
        for (const auto& entry : std::filesystem::directory_iterator(dataPath))
            return entry.path().string().c_str();
    }
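
    For reference, FFmpeg's RTSP client does not pick multicast transport by default, so it may need to be forced when opening the stream above. A hedged example ("live" and port 8554 match the code; <server-ip> is a placeholder):

    ffplay -rtsp_transport udp_multicast rtsp://<server-ip>:8554/live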