Advanced search

Media (0)

No media matching your criteria are available on this site.

Other articles (53)

  • MediaSPIP v0.2

    21 June 2013, by

    MediaSPIP 0.2 is the first stable version of MediaSPIP.
    Its official release date is 21 June 2013, and it is announced here.
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    As with the previous version, all the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • MediaSPIP version 0.1 Beta

    16 April 2011, by

    MediaSPIP 0.1 beta is the first version of MediaSPIP declared "usable".
    The zip file provided here contains only the MediaSPIP sources, in standalone form.
    To get a working installation, all the software dependencies must be installed manually on the server.
    If you want to use this archive for an installation in farm mode, you will also need to make other modifications (...)

  • Customising categories

    21 June 2013, by

    Category creation form
    For those who know SPIP well, a category can be likened to a "rubrique" (section).
    For a document of type "category", the fields offered by default are: Text
    This form can be modified under:
    Administration > Configuration of form masks.
    For a document of type "media", the fields not displayed by default are: Short description
    Moreover, it is in this configuration section that you can specify the (...)

On other sites (11151)

  • Maintaining exact aspect ratio when scaling videos using ffmpeg

    31 May 2012, by Skkard

    I have an MKV video that is a mix of recordings at multiple resolutions: e.g. the first few seconds are widescreen 16:9 (1024x576), and the rest of the video is 4:3 (768x576). I want to scale this video down 3 times, while copying all the other attributes (audio codec, subtitles, etc.). I use ffmpeg -i  -vf scale=iw/2:-1 -acodec copy . Also, VLC detects its resolution as 720x576.

    The problem is that after scaling, the resolution always comes out as 4:3 (360x288). How can I maintain the dynamic aspect ratio of the input video file, i.e. have the 16:9 parts scale to 16:9 while the 4:3 parts scale to 4:3?

    Update

    The player window size actually changes, at least in mplayer, when the resolution switches. I have figured out the main problem: each frame is tagged with a Sample Aspect Ratio (SAR), which the player uses to derive the display aspect ratio. This SAR value isn't being copied over when encoding to MKV. When encoding to MPG it does get copied over and I get an exact copy, with the player switching sizes, but not with MKV.
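
    If a single display aspect is acceptable, one workaround is to re-assert the aspect ratio explicitly after scaling, so the muxer has a value to write; note that Matroska stores the display size per track rather than per frame, so a mid-stream SAR switch cannot be fully represented there. A minimal sketch, assuming the setdar filter is available in this ffmpeg build (in.mkv and out.mkv are placeholder names):

     # Halve the frame size, then tag the stream with an explicit 16:9
     # display aspect ratio so the muxer has a value to carry into the MKV.
     ffmpeg -i in.mkv -vf "scale=iw/2:-1,setdar=16/9" -acodec copy out.mkv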

    Output of ffprobe -show_streams filename:

    ffprobe version 0.10.3 Copyright (c) 2007-2012 the FFmpeg developers
     built on May  9 2012 17:51:07 with gcc 4.7.0 20120505 (prerelease)
     configuration: --prefix=/usr --enable-libmp3lame --enable-libvorbis --enable-libxvid --enable-libx264 --enable-libvpx --enable-libtheora --enable-libgsm --enable-libspeex --enable-postproc --enable-shared --enable-x11grab --enable-libopencore_amrnb --enable-libopencore_amrwb --enable-libschroedinger --enable-libopenjpeg --enable-librtmp --enable-libpulse --enable-gpl --enable-version3 --enable-runtime-cpudetect --disable-debug --disable-static
     libavutil      51. 35.100 / 51. 35.100
     libavcodec     53. 61.100 / 53. 61.100
     libavformat    53. 32.100 / 53. 32.100
     libavdevice    53.  4.100 / 53.  4.100
     libavfilter     2. 61.100 /  2. 61.100
     libswscale      2.  1.100 /  2.  1.100
     libswresample   0.  6.100 /  0.  6.100
     libpostproc    52.  0.100 / 52.  0.100
    Input #0, matroska,webm, from 'sample.mkv':
     Metadata:
       title           : Pan prstenu. Dve veze
     Duration: 00:00:29.80, start: 0.000000, bitrate: 3124 kb/s
       Stream #0:0(eng): Video: mpeg2video (Main), yuv420p, 720x576 [SAR 64:45 DAR 16:9], 15000 kb/s, 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
       Stream #0:1(cze): Audio: mp2, 48000 Hz, stereo, s16, 128 kb/s (default)
    [STREAM]
    index=0
    codec_name=mpeg2video
    codec_long_name=MPEG-2 video
    codec_type=video
    codec_time_base=1/50
    codec_tag_string=[0][0][0][0]
    codec_tag=0x0000
    width=720
    height=576
    has_b_frames=1
    sample_aspect_ratio=64:45
    display_aspect_ratio=16:9
    pix_fmt=yuv420p
    level=8
    timecode=16:35:19:10
    id=N/A
    r_frame_rate=25/1
    avg_frame_rate=25/1
    time_base=1/1000
    start_time=0.000000
    duration=N/A
    nb_frames=N/A
    TAG:language=eng
    [/STREAM]
    [STREAM]
    index=1
    codec_name=mp2
    codec_long_name=MP2 (MPEG audio layer 2)
    codec_type=audio
    codec_time_base=1/48000
    codec_tag_string=[0][0][0][0]
    codec_tag=0x0000
    sample_fmt=s16
    sample_rate=48000
    channels=2
    bits_per_sample=0
    id=N/A
    r_frame_rate=0/0
    avg_frame_rate=125/3
    time_base=1/1000
    start_time=0.000000
    duration=N/A
    nb_frames=N/A
    TAG:language=cze
    [/STREAM]
  • FFMPEG Incorrect frame size, how to resolve?

    7 June 2012, by RAGOpoR

    Running on CentOS 5.6, is it possible to fix this issue?

    [ragopor@xs1 livestream]$ ffmpeg -i hello.mp4 -f mpegts -acodec libmp3lame -ar 48000 -ab 64k -s 320×240 -vcodec libx264 -b 96k -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71 -bt 200k -maxrate 96k -bufsize 96k -rc_eq 'blurCplx^(1-qComp)' -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -aspect 320:240 -g 30 -async 2 helloout.mp4
    FFmpeg version 0.6.1, Copyright (c) 2000-2010 the FFmpeg developers
     built on Dec  4 2010 15:35:31 with gcc 4.1.2 20080704 (Red Hat 4.1.2-48)
     configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --incdir=/usr/include --disable-avisynth --extra-cflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC' --enable-avfilter --enable-avfilter-lavf --enable-libdirac --enable-libfaac --enable-libfaad --enable-libfaadbin --enable-libgsm --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-vdpau --enable-version3 --enable-x11grab
     libavutil     50.15. 1 / 50.15. 1
     libavcodec    52.72. 2 / 52.72. 2
     libavformat   52.64. 2 / 52.64. 2
     libavdevice   52. 2. 0 / 52. 2. 0
     libavfilter    1.19. 0 /  1.19. 0
     libswscale     0.11. 0 /  0.11. 0
     libpostproc   51. 2. 0 / 51. 2. 0

    Seems stream 1 codec frame rate differs from container frame rate: 59.83 (29917/500) -> 29.92 (29917/1000)
    Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'hello.mp4':
     Metadata:
       major_brand     : mp42
       minor_version   : 0
       compatible_brands: isomavc1mp42
     Duration: 00:02:47.66, start: 0.000000, bitrate: 644 kb/s
       Stream #0.0(und): Audio: aac, 44100 Hz, stereo, s16, 102 kb/s
       Stream #0.1(und): Video: h264, yuv420p, 480x320 [PAR 1:1 DAR 3:2], 539 kb/s, 29.92 fps, 29.92 tbr, 29917 tbn, 59.83 tbc
    Incorrect frame size
    [ragopor@xs1 livestream]$
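
    The likely culprit is the size argument: -s 320×240 contains the Unicode multiplication sign (×) rather than the ASCII letter x, which ffmpeg cannot parse as a frame size, hence "Incorrect frame size". A sketch of the corrected invocation, trimmed to the essential flags (the remaining x264 options from the question can be appended unchanged):

     # Write the frame size with an ASCII "x", not the "×" glyph:
     ffmpeg -i hello.mp4 -f mpegts -acodec libmp3lame -ar 48000 -ab 64k \
            -s 320x240 -vcodec libx264 -b 96k helloout.mp4
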
  • Problems with Streaming a Multicast RTSP Stream with Live555

    16 June 2014, by ALM865

    I am having trouble setting up a multicast RTSP session using Live555. The examples included with Live555 are mostly irrelevant, as they deal with reading from files; my code differs because it reads encoded frames generated by an FFMPEG thread within my own program (no pipes, no saving to disk; it genuinely passes pointers to memory containing the encoded frames for Live555 to stream).

    My Live555 project uses a custom ServerMediaSubsession so that I can receive data from an FFMPEG thread within my program (instead of Live555’s default reading from a file, yuk!). This is a requirement of my program, as it reads a GigEVision stream in one thread and sends the decoded raw RGB packets to the FFMPEG thread, which in turn sends the encoded frames off to Live555 for RTSP streaming.

    For the life of me I can’t work out how to send the RTSP stream as multicast instead of unicast!

    Just a note: my program currently works perfectly streaming unicast, so there is nothing wrong with my Live555 implementation (before you go crazy picking out irrelevant errors!). I just need to know how to modify my existing code to stream multicast instead of unicast.

    My program is way too big to upload and share, so I’m just going to share the important bits:

    Live_AnalysingServerMediaSubsession.h

    #ifndef _ANALYSING_SERVER_MEDIA_SUBSESSION_HH
    #define _ANALYSING_SERVER_MEDIA_SUBSESSION_HH

     #include <liveMedia.hh> // assumption: the original angle-bracket include was stripped from the post
    #include "Live_AnalyserInput.h"

    class AnalysingServerMediaSubsession: public OnDemandServerMediaSubsession {

    public:
     static AnalysingServerMediaSubsession*
     createNew(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate,
           Boolean iFramesOnly = False,
               double vshPeriod = 5.0
               /* how often (in seconds) to inject a Video_Sequence_Header,
                  if one doesn't already appear in the stream */);

    protected: // we're a virtual base class
     AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& AnalyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod);
     virtual ~AnalysingServerMediaSubsession();

    protected:
     AnalyserInput& fAnalyserInput;
     unsigned fEstimatedKbps;

    private:
     Boolean fIFramesOnly;
     double fVSHPeriod;

     // redefined virtual functions
     virtual FramedSource* createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate);
     virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource);

    };

    #endif

    And "Live_AnalysingServerMediaSubsession.cpp"

    #include "Live_AnalysingServerMediaSubsession.h"
     #include <liveMedia.hh>       // assumption: the original angle-bracket
     #include <GroupsockHelper.hh> // includes were stripped from the post

    AnalysingServerMediaSubsession* AnalysingServerMediaSubsession::createNew(UsageEnvironment& env, AnalyserInput& wisInput, unsigned estimatedBitrate,
       Boolean iFramesOnly,
       double vshPeriod) {
           return new AnalysingServerMediaSubsession(env, wisInput, estimatedBitrate,
               iFramesOnly, vshPeriod);
    }

    AnalysingServerMediaSubsession
       ::AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& analyserInput,   unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod)
       : OnDemandServerMediaSubsession(env, True /*reuse the first source*/),

       fAnalyserInput(analyserInput), fIFramesOnly(iFramesOnly), fVSHPeriod(vshPeriod) {
           fEstimatedKbps = (estimatedBitrate + 500)/1000;

    }

    AnalysingServerMediaSubsession
       ::~AnalysingServerMediaSubsession() {
    }

    FramedSource* AnalysingServerMediaSubsession ::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
       estBitrate = fEstimatedKbps;

       // Create a framer for the Video Elementary Stream:
       //LOG_MSG("Create Net Stream Source [%d]", estBitrate);

       return MPEG1or2VideoStreamDiscreteFramer::createNew(envir(), fAnalyserInput.videoSource());
    }

    RTPSink* AnalysingServerMediaSubsession ::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/, FramedSource* /*inputSource*/) {
       setVideoRTPSinkBufferSize();
       /*
       struct in_addr destinationAddress;
       destinationAddress.s_addr = inet_addr("239.255.12.42");

       rtpGroupsock->addDestination(destinationAddress,8888);
       rtpGroupsock->multicastSendOnly();
       */
       return MPEG1or2VideoRTPSink::createNew(envir(), rtpGroupsock);
    }
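
    (The commented-out block above points at the core issue: an OnDemandServerMediaSubsession creates and manages its own groupsocks per client, so adding a multicast destination to this one is not enough. Multicast in Live555 is normally built the other way round: an RTPSink already streaming to a multicast Groupsock, described to RTSP clients by a PassiveServerMediaSubsession; see the sketch at the end of this post.)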

    Live_AnalyserSource.h

    #ifndef _ANALYSER_SOURCE_HH
    #define _ANALYSER_SOURCE_HH

    #ifndef _FRAMED_SOURCE_HH
    #include "FramedSource.hh"
    #endif

    class FFMPEG;

    // The following class can be used to define specific encoder parameters
    class AnalyserParameters {
    public:
     FFMPEG * Encoding_Source;
    };

    class AnalyserSource: public FramedSource {
    public:
     static AnalyserSource* createNew(UsageEnvironment& env, FFMPEG * E_Source);
     static unsigned GetRefCount();


    public:
     static EventTriggerId eventTriggerId;

    protected:
     AnalyserSource(UsageEnvironment& env, FFMPEG *  E_Source);
     // called only by createNew(), or by subclass constructors
     virtual ~AnalyserSource();

    private:
     // redefined virtual functions:
     virtual void doGetNextFrame();

    private:
     static void deliverFrame0(void* clientData);
     void deliverFrame();


    private:
     static unsigned referenceCount; // used to count how many instances of this class currently exist
     FFMPEG * Encoding_Source;

     unsigned int Last_Sent_Frame_ID;
    };

    #endif

    Live_AnalyserSource.cpp

    #include "Live_AnalyserSource.h"
     #include <sys/time.h> // for "gettimeofday()"
    #include "FFMPEGClass.h"

    AnalyserSource* AnalyserSource::createNew(UsageEnvironment& env, FFMPEG * E_Source) {
     return new AnalyserSource(env, E_Source);
    }


    EventTriggerId AnalyserSource::eventTriggerId = 0;

    unsigned AnalyserSource::referenceCount = 0;

    AnalyserSource::AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source) : FramedSource(env), Encoding_Source(E_Source) {
     if (referenceCount == 0) {
       // Any global initialization of the device would be done here:

     }
     ++referenceCount;

     // Any instance-specific initialization of the device would be done here:
     Last_Sent_Frame_ID = 0;

     /* register us with the Encoding thread so we'll get notices when new frame data turns up.. */
     Encoding_Source->RegisterRTSP_Source(&(env.taskScheduler()), this);

     // We arrange here for our "deliverFrame" member function to be called
     // whenever the next frame of data becomes available from the device.
     //
     // If the device can be accessed as a readable socket, then one easy way to do this is using a call to
     //     envir().taskScheduler().turnOnBackgroundReadHandling( ... )
     // (See examples of this call in the "liveMedia" directory.)
     //
     // If, however, the device *cannot* be accessed as a readable socket, then instead we can implement it using 'event triggers':
     // Create an 'event trigger' for this device (if it hasn't already been done):
     if (eventTriggerId == 0) {
       eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
     }
    }

    AnalyserSource::~AnalyserSource() {
     // Any instance-specific 'destruction' (i.e., resetting) of the device would be done here:

     /* de-register this source from the Encoding thread, since we no longer need notices.. */
     Encoding_Source->Un_RegisterRTSP_Source(this);

     --referenceCount;
     if (referenceCount == 0) {
       // Any global 'destruction' (i.e., resetting) of the device would be done here:

       // Reclaim our 'event trigger'
       envir().taskScheduler().deleteEventTrigger(eventTriggerId);
       eventTriggerId = 0;
     }

    }

    unsigned AnalyserSource::GetRefCount() {
     return referenceCount;
    }

    void AnalyserSource::doGetNextFrame() {
     // This function is called (by our 'downstream' object) when it asks for new data.
     //LOG_MSG("Do Next Frame..");
     // Note: If, for some reason, the source device stops being readable (e.g., it gets closed), then you do the following:
     //if (0 /* the source stops being readable */ /*%%% TO BE WRITTEN %%%*/) {
     unsigned int FrameID = Encoding_Source->GetFrameID();
     if (FrameID == 0){
       //LOG_MSG("No Data. Close");
       handleClosure(this);
       return;
     }



     // If a new frame of data is immediately available to be delivered, then do this now:
     if (Last_Sent_Frame_ID != FrameID){
       deliverFrame();
       //DEBUG_MSG("Frame ID: %d",FrameID);
     }

     // No new data is immediately available to be delivered.  We don't do anything more here.
     // Instead, our event trigger must be called (e.g., from a separate thread) when new data becomes available.
    }

    void AnalyserSource::deliverFrame0(void* clientData) {
     ((AnalyserSource*)clientData)->deliverFrame();
    }

    void AnalyserSource::deliverFrame() {

     if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet


     static u_int8_t* newFrameDataStart;
     static unsigned newFrameSize = 0;

     /* get the data frame from the Encoding thread.. */
     if (Encoding_Source->GetFrame(&newFrameDataStart, &newFrameSize, &Last_Sent_Frame_ID)){
       if (newFrameDataStart!=NULL) {
           /* This should never happen, but check anyway.. */
           if (newFrameSize > fMaxSize) {
             fFrameSize = fMaxSize;
             fNumTruncatedBytes = newFrameSize - fMaxSize;
           } else {
             fFrameSize = newFrameSize;
           }
           gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
           // If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
           /* move the data to be sent off.. */
           memmove(fTo, newFrameDataStart, fFrameSize);

           /* release the Mutex we had on the Frame's buffer.. */
           Encoding_Source->ReleaseFrame();
       }
       else {
           //AM Added, something bad happened
           //ALTRACE("LIVE555: FRAME NULL\n");
           fFrameSize=0;
           fTo=NULL;
           handleClosure(this);
           return; // don't fall through to afterGetting() once the source is closed
       }
     }
     else {
       //LOG_MSG("Closing Connection due to Frame Error..");
       handleClosure(this);
       return; // likewise, stop here after signalling closure
     }


     // After delivering the data, inform the reader that it is now available:
     FramedSource::afterGetting(this);
    }

    Live_AnalyserInput.cpp

    #include "Live_AnalyserInput.h"
    #include "Live_AnalyserSource.h"


     ////////// AnalyserInput implementation //////////

    AnalyserInput* AnalyserInput::createNew(UsageEnvironment& env, FFMPEG *Encoder) {
     if (!fHaveInitialized) {
       //if (!initialize(env)) return NULL;
       fHaveInitialized = True;
     }

     return new AnalyserInput(env, Encoder);
    }


    FramedSource* AnalyserInput::videoSource() {
     if (fOurVideoSource == NULL || AnalyserSource::GetRefCount() == 0) {
       fOurVideoSource = AnalyserSource::createNew(envir(), m_Encoder);
     }
     return fOurVideoSource;
    }


    AnalyserInput::AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder): Medium(env), m_Encoder(Encoder) {
    }

    AnalyserInput::~AnalyserInput() {
     /* When we get destroyed, make sure our source is also destroyed.. */
     if (fOurVideoSource != NULL && AnalyserSource::GetRefCount() != 0) {
       AnalyserSource::handleClosure(fOurVideoSource);
     }
    }




    Boolean AnalyserInput::fHaveInitialized = False;
    int AnalyserInput::fOurVideoFileNo = -1;
    FramedSource* AnalyserInput::fOurVideoSource = NULL;

    Live_AnalyserInput.h

    #ifndef _ANALYSER_INPUT_HH
    #define _ANALYSER_INPUT_HH

     #include <liveMedia.hh> // assumption: the original angle-bracket include was stripped from the post
    #include "FFMPEGClass.h"


    class AnalyserInput: public Medium {
    public:
     static AnalyserInput* createNew(UsageEnvironment& env, FFMPEG *Encoder);

     FramedSource* videoSource();

    private:
     AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder); // called only by createNew()
     virtual ~AnalyserInput();

    private:
     friend class WISVideoOpenFileSource;
     static Boolean fHaveInitialized;
     static int fOurVideoFileNo;
     static FramedSource* fOurVideoSource;
     FFMPEG *m_Encoder;
    };

    // Functions to set the optimal buffer size for RTP sink objects.
    // These should be called before each RTPSink is created.
    #define VIDEO_MAX_FRAME_SIZE 300000
    inline void setVideoRTPSinkBufferSize() { OutPacketBuffer::maxSize = VIDEO_MAX_FRAME_SIZE; }

    #endif

    And finally, the relevant code from my Live555 worker thread that starts the whole process:

       Stop_RTSP_Loop=0;
       //  MediaSession     *ms;
       TaskScheduler    *scheduler;
       UsageEnvironment *env ;
       //  RTSPClient       *rtsp;
       //  MediaSubsession  *Video_Sub;

       char RTSP_Address[1024];
       RTSP_Address[0]=0x00;

       if (m_Encoder == NULL){
           //DEBUG_MSG("No Video Encoder registered for the RTSP Encoder");
           return 0;
       }

       scheduler = BasicTaskScheduler::createNew();
       env = BasicUsageEnvironment::createNew(*scheduler);

       UserAuthenticationDatabase* authDB = NULL;
    #ifdef ACCESS_CONTROL
       // To implement client access control to the RTSP server, do the following:

       if (m_Enable_Pass){
           authDB = new UserAuthenticationDatabase;
           authDB->addUserRecord(UserN, PassW);
       }
       ////////// authDB = new UserAuthenticationDatabase;
       ////////// authDB->addUserRecord((char*)"Admin", (char*)"Admin"); // replace these with real strings
       // Repeat the above with each <username>, <password> that you wish to allow
       // access to the server.
    #endif

       // Create the RTSP server:
       RTSPServer* rtspServer = RTSPServer::createNew(*env, 554, authDB);
       ServerMediaSession* sms;

       AnalyserInput* inputDevice;


       if (rtspServer == NULL) {
           TRACE("LIVE555: Failed to create RTSP server: %s\n", env->getResultMsg());
           return 0;
       }
       else {
           char const* descriptionString = "Session streamed by \"IMC Server\"";



           // Initialize the WIS input device:
           inputDevice = AnalyserInput::createNew(*env, m_Encoder);
           if (inputDevice == NULL) {
               TRACE("Live555: Failed to create WIS input device\n");
               return 0;
           }
           else {
               // A MPEG-1 or 2 video elementary stream:
               /* Increase the buffer size so we can handle the high res stream.. */
               OutPacketBuffer::maxSize = 300000;
               // NOTE: This *must* be a Video Elementary Stream; not a Program Stream
               sms = ServerMediaSession::createNew(*env, RTSP_Address, RTSP_Address, descriptionString);

               //sms->addSubsession(MPEG1or2VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource, iFramesOnly));

               sms->addSubsession(AnalysingServerMediaSubsession::createNew(*env, *inputDevice, m_Encoder->Get_Bitrate()));
               //sms->addSubsession(WISMPEG1or2VideoServerMediaSubsession::createNew(sms->envir(), inputDevice, videoBitrate));

               rtspServer->addServerMediaSession(sms);

               //announceStream(rtspServer, sms, streamName, inputFileName);
               //LOG_MSG("Play this stream using the URL %s", rtspServer->rtspURL(sms));

           }
       }

       Stop_RTSP_Loop=0;

       for (;;)
       {
           /* The actual work is all carried out inside the LIVE555 Task scheduler */
           env->taskScheduler().doEventLoop(&Stop_RTSP_Loop); // runs until Stop_RTSP_Loop is set

           if (mStop) {
               break;
           }
       }

       Medium::close(rtspServer); // will also reclaim "sms" and its "ServerMediaSubsession"s
       Medium::close(inputDevice);
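
    For reference, a minimal sketch of a multicast variant of the worker thread above, modelled on Live555’s testMPEG1or2VideoStreamer example. It reuses env, sms, rtspServer, inputDevice and Stop_RTSP_Loop from the question’s code; the group address, port numbers, TTL and bandwidth figure are illustrative assumptions, not values from the original program:

     #include <liveMedia.hh>
     #include <BasicUsageEnvironment.hh>
     #include <GroupsockHelper.hh> // for chooseRandomIPv4SSMAddress()
     #include <arpa/inet.h>        // for inet_addr()
     #include <unistd.h>           // for gethostname()

     // ... after env and rtspServer have been created, in place of the
     // OnDemand AnalysingServerMediaSubsession setup:

     struct in_addr destinationAddress;
     destinationAddress.s_addr = inet_addr("239.255.12.42"); // assumed group address
     // or let Live555 pick one: chooseRandomIPv4SSMAddress(*env);

     const Port rtpPort(18888);      // illustrative port numbers
     const Port rtcpPort(18888 + 1); // RTCP is conventionally RTP port + 1
     const unsigned char ttl = 7;

     Groupsock* rtpGroupsock  = new Groupsock(*env, destinationAddress, rtpPort, ttl);
     rtpGroupsock->multicastSendOnly();
     Groupsock* rtcpGroupsock = new Groupsock(*env, destinationAddress, rtcpPort, ttl);
     rtcpGroupsock->multicastSendOnly();

     // One sink, created up front: every client shares this single stream.
     setVideoRTPSinkBufferSize();
     RTPSink* videoSink = MPEG1or2VideoRTPSink::createNew(*env, rtpGroupsock);

     // An RTCP instance is needed so the SDP advertises the session correctly:
     const unsigned estimatedSessionBandwidthKbps = 4500; // assumed figure
     const unsigned maxCNAMElen = 100;
     unsigned char CNAME[maxCNAMElen + 1];
     gethostname((char*)CNAME, maxCNAMElen);
     CNAME[maxCNAMElen] = '\0';
     RTCPInstance* rtcp = RTCPInstance::createNew(*env, rtcpGroupsock,
         estimatedSessionBandwidthKbps, CNAME, videoSink, NULL /*we're a server*/);

     // A Passive subsession merely *describes* the already-running multicast
     // stream to RTSP clients, instead of creating per-client sockets:
     sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
     rtspServer->addServerMediaSession(sms);

     // Start streaming immediately; multicast runs whether or not anyone joins.
     FramedSource* videoES = inputDevice->videoSource();
     FramedSource* videoSource =
         MPEG1or2VideoStreamDiscreteFramer::createNew(*env, videoES);
     videoSink->startPlaying(*videoSource, NULL, NULL);

     env->taskScheduler().doEventLoop(&Stop_RTSP_Loop);

    With this arrangement the OnDemand subsession class is no longer needed for the multicast case; the framer is connected straight to the sink, and clients that DESCRIBE the session are simply told which multicast group to join.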