
Other articles (72)

  • Configuring language support

    15 November 2010

    Accessing the configuration and adding supported languages
    To configure support for new languages, you need to go to the "Administer" part of the site.
    From there, the navigation menu gives access to a "Language management" section where support for new languages can be enabled.
    Each newly added language can still be disabled as long as no object has been created in that language. Once that happens, it becomes greyed out in the configuration and (...)

  • Websites made with MediaSPIP

    2 May 2011

    This page lists some websites based on MediaSPIP.

  • Support for all media types

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); textual content, code or other (Open Office, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

On other sites (7966)

  • How to compile FFmpeg with bash on Windows 10?

    10 August 2017, by user2980183

    In the Creators Update (1703), Bash on Windows 10 (WSL) can run native Windows tools. So, under Windows 10, you can now compile FFmpeg with Visual Studio and the required Linux utilities natively, without having to install MinGW or other interop cruft.
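
    For example, Windows binaries on the PATH run natively from the WSL bash prompt (assuming the default PATH inheritance; "where.exe" here is the native Windows tool, not /usr/bin/where):

    cmd.exe /c ver      # prints the Windows version via the native cmd
    where.exe notepad   # locates a Windows executable using the Windows PATH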

    Here is how I tried to compile FFmpeg (using the MSVC 2017 toolchain):

    • I started the bash shell from the "Developer Command Prompt for VS 2017" so that the PATH variable is set to find the MSVC compiler/linker.

    • I installed the "make" tool with the following command: sudo apt-get install make.

    • To call a Windows program from the bash shell, I have to type the command with its .exe extension. So I had to adapt the "configure" and "compat/windows/makedef" files to append ".exe" to the "cl", "link" (no confusion possible with /usr/bin/link that way), "dumpbin" and "lib" executables:

    configure

    cl_major_ver=$(cl.exe 2>&1 | sed -n 's/.*Version \([[:digit:]]\{1,\}\)\..*/\1/p')
    if [ -z "$cl_major_ver" ] || [ $cl_major_ver -ge 18 ]; then
        cc_default="cl.exe"
    else
        cc_default="c99wrap cl"
    fi
    ld_default="link.exe"
    nm_default="dumpbin.exe -symbols"
    ar_default="lib.exe"

    compat/windows/makedef

    lib.exe -out:${libname} $@ >/dev/null

    arch=$(dumpbin.exe -headers ${libname} |
      tr '\t' ' ' |
      grep '^ \+.\+machine \+(.\+)' |
      head -1 |
      sed -e 's/^ \{1,\}.\{1,\} \{1,\}machine \{1,\}(\(...\)).*/\1/')

    dump=$(dumpbin.exe -linkermember:1 ${libname})

    I hope the FFmpeg build toolchain will be adapted in the future to support compiling natively in bash on Windows...
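
    In the meantime, here is a sketch of scripting those renames (assuming GNU sed, and that the stock assignments in your FFmpeg checkout match these strings exactly; verify before running):

    sed -i -e 's/cc_default="cl"/cc_default="cl.exe"/' \
           -e 's/ld_default="link"/ld_default="link.exe"/' \
           -e 's/nm_default="dumpbin -symbols"/nm_default="dumpbin.exe -symbols"/' \
           -e 's/ar_default="lib"/ar_default="lib.exe"/' configure
    # the same idea for the tool calls quoted above from compat/windows/makedef
    sed -i -e 's/\blib /lib.exe /' -e 's/\bdumpbin /dumpbin.exe /' compat/windows/makedef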

    • If you want to set an absolute path for YASM in "configure", you have to use the pattern /mnt/[drive_letter]/path/yasm.exe:

    ./configure --toolchain=msvc --yasmexe='/mnt/e/Home/Important/Development/Toolkit/Tools/yasm-1.3.0-win32.exe' [all other settings you need]

    • The make command should generate the lib as expected.

    • I tried to type make install but I get the following error:
      No rule to make the target "libavutil\x86\x86util.asm", needed for "libavutil/x86/cpuid.o".

    I don’t know what is wrong...

    As you can see, building FFmpeg in bash under Windows is not very developer-friendly yet. Compiling FFmpeg on Windows should be as easy as on Linux. Do you know if there is an easier way to proceed? Do you know if this compilation scenario will be officially supported in future versions of FFmpeg (I can't find any roadmap)?

    Thanks.

  • How to use the LiveMedia streaming library to send a video stream to a specific URL? [on hold]

    21 July 2017, by Lucky Man

    I have an IP camera on my local network and a video streaming server somewhere on the internet:

    rtsp://petproject.streaming.com:1935/go/dostreaming

    On the IP camera there is a Linux-based OS with an RTSP streaming server:

    rtsp://10.0.10.74/stream1

    based on the LiveMedia streaming library:
    http://www.live555.com/liveMedia/

    I need to stream video from my camera to my server.

    Currently I can do it this way:

    1. On the IP camera: run rtsp_server

    2. On my host computer (connected to the IP camera): ffmpeg -i
       rtsp://10.0.10.74/stream1 -acodec copy -vcodec copy -f rtsp
       rtsp://petproject.streaming.com:1935/go/dostreaming

    3. On my host: ffplay
       rtsp://petproject.streaming.com:1935/go/transcoded_steam
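
    As a side note, each hop can be sanity-checked with ffprobe (it ships with FFmpeg), e.g.:

    ffprobe rtsp://10.0.10.74/stream1
    ffprobe rtsp://petproject.streaming.com:1935/go/dostreaming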

    But now I want to stream directly to my server.
    I've investigated the sources of the IP camera's RTSP server and I know that it uses the LiveMedia library.
    It is supposed to be easy to modify it to stream directly to my server.
    I guess I should use GroupSock in some special way, specifying my server's address directly.
    But I don't understand how to do it, because I'm a newbie in video streaming and in networking as well.

    Could somebody please advise me how I should use the LiveMedia library to achieve my goal (streaming directly to the URL rtsp://petproject.streaming.com:1935/go/dostreaming)?

    Any help would be greatly appreciated.

    (P.S. I know that it is possible to reconfigure my server and open my camera's IP to it,
    but I still want to do it the way I've described.)

    Piece of code:

    #include <stdio.h>
    #include <stdlib.h>

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"
    #include "GroupsockHelper.hh"
    #include "H264VideoFileServerMediaSubsession.hh"

    // The original snippet left the following undefined; the values below are assumptions:
    #define MAX_ENCODE_STREAM_NUM 2                  // number of encoder streams

    UsageEnvironment * env;                          // environment of the RTSP server itself
    static const char * descriptionString = "Session streamed by the camera";
    static const Boolean is_ssm = False;             // not source-specific multicast
    static const Boolean reuseFirstSource = True;    // share one source among clients
    static const portNumBits clientPort = 0;         // assumed; used for the SDP override below

    void create_live_stream(RTSPServer * rtspServer, int stream,
            UsageEnvironment * env_stream)
    {
        char streamName[32], inputFileName[32];
        sprintf(streamName, "stream%d", (stream+1));
        sprintf(inputFileName, "live_stream%d", (stream+1));

        // Note: the session uses the server's environment while the
        // subsession uses the per-stream one, as in the original snippet.
        ServerMediaSession * sms =
            ServerMediaSession::createNew(*env, streamName, streamName,
                descriptionString, is_ssm /*SSM*/);
        ServerMediaSubsession * subsession =
            H264VideoFileServerMediaSubsession::createNew(*env_stream,
                inputFileName, reuseFirstSource);

        sms->addSubsession(subsession);
        subsession->setServerAddressAndPortForSDP(0, clientPort);
        rtspServer->addServerMediaSession(sms);

        char * url = rtspServer->rtspURL(sms);
        *env << "Play this stream using the URL \"" << url << "\"\n";
        delete[] url;
    }

    void setup_streams(RTSPServer * rtspServer)
    {
        TaskScheduler * scheduler[MAX_ENCODE_STREAM_NUM];
        UsageEnvironment * env_stream[MAX_ENCODE_STREAM_NUM];
        int i;

        // One scheduler/environment pair per stream; note that only the main
        // environment's event loop is ever run (see main below).
        for (i = 0; i < MAX_ENCODE_STREAM_NUM; ++i) {
            scheduler[i] = BasicTaskScheduler::createNew();
            env_stream[i] = BasicUsageEnvironment::createNew(*scheduler[i]);
            create_live_stream(rtspServer, i, env_stream[i]);
        }
    }

    int main()
    {
        // Begin by setting up our usage environment:
        TaskScheduler * scheduler = BasicTaskScheduler::createNew();
        env = BasicUsageEnvironment::createNew(*scheduler);

        UserAuthenticationDatabase * authDB = NULL;

        // Create the RTSP server:
        RTSPServer * rtspServer = RTSPServer::createNew(*env, 554, authDB);
        if (rtspServer == NULL) {
            *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
            exit(1);
        }

        setup_streams(rtspServer);

        // does not return
        env->taskScheduler().doEventLoop();

        return 0;
    }
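
    For reference, this kind of snippet builds against live555 with something like the following (Debian-style liblivemedia-dev header layout assumed; file name and include paths vary by install):

    g++ -o rtsp_server rtsp_server.cpp \
        -I/usr/include/liveMedia -I/usr/include/groupsock \
        -I/usr/include/BasicUsageEnvironment -I/usr/include/UsageEnvironment \
        -lliveMedia -lgroupsock -lBasicUsageEnvironment -lUsageEnvironment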
  • FFMPEG H.264 to JPEG for real time video

    18 July 2017, by Joe Quinn

    Any help appreciated.

    We are transcoding H.264 streams into JPEGs which are sent across a WebSocket to the browser. We are doing this so we can deliver real-time video to a browser natively, with no need for plugins, in a browser-agnostic way. If there is a better way to do this then it would be great to know more. The video sources are H.264, though, and we can't change that.

    As we lower the FPS we see greater lag in the camera video feed, e.g. at 1 FPS the video in the browser is 8 seconds behind; at 15 FPS it is about 1 second behind. So even though at 1 FPS it updates every second, each frame is 8 seconds behind.

    We think this is because, at the lower frame rate, FFmpeg has to wait longer for an I-frame and won't send a JPEG to the WebSocket until it has a complete one. We would rather it sent the JPEG without waiting for the I-frame to arrive; we would rather see a partial image that gradually gets filled in on the browser. We cannot tolerate a lag of greater than 0.8 seconds in the browser. When the cameras are set to send MJPEG we see 0.250 seconds of lag. With H.264 we see 1.25 seconds, and we need to get that down to 0.8 seconds, so we really are looking to fine-tune H.264 to shave off some time. That's why, when our first approach of lowering the FPS made things worse, we were surprised, and we wonder what else needs to be tuned in step with the FPS to get a good result.

    Is there any option for FFmpeg that tells it to send JPEGs as soon as the first piece of data arrives? Or maybe we should look at other tuning avenues?

    Here are the FFmpeg parameters:

    ffmpeg \
        -buffer_size 1024000 \
        -r 15 \
        -i "rtsp://10.140.150.92/02441987-0826-4dc2-b9bd-62efdc0dd951/02441987-0826-4dc2-b9bd-62efdc0dd951_vs1?token=02441987-0826-4dc2-b9bd-62efdc0dd951^LVEAMOKTD^100^40^26^1500482113^a97effd2a6f85c4a0b5e93953b27c8e1eb40ca77&username=USER1" \
        -f image2 \
        -multiple_requests 1 \
        -icy 0 \
        -chunked_post 0 \
        -q:v 31 \
        -vsync 1 \
        -r 15 \
        -vf scale=640:-1 \
        "http://127.0.0.1:58014/video/cameraTag_deviceId_22cameraUid_-1scale_640:-1cameraOrigin_requestedStreams_videostream1/frame-%03d.jpeg"
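
    Would FFmpeg's documented low-latency input options help here? A sketch of what we could try (untested against this camera; the URLs are shortened to placeholders for the real ones above):

    ffmpeg \
        -fflags nobuffer -flags low_delay \
        -probesize 32 -analyzeduration 0 \
        -i "rtsp://<camera-url>" \
        -f image2 -q:v 31 -vf scale=640:-1 \
        "http://127.0.0.1:58014/<output-path>/frame-%03d.jpeg"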

    Many Thanks,
    Joe.