Advanced search


Other articles (67)

  • Participate in its translation

    10 April 2011

    You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
    To do so, we use SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to subscribe to the translators' mailing list and ask for more information.
    At the moment MediaSPIP is only available in French and (...)

  • Support for all types of media

    10 April 2011

    Unlike many modern document-sharing programs and platforms, MediaSPIP aims to handle as many different document formats as possible, whether of type: images (png, gif, jpg, bmp and others...); audio (MP3, Ogg, Wav and others...); video (Avi, MP4, Ogv, mpg, mov, wmv and others...); or textual content, code and more (OpenOffice, Microsoft Office (spreadsheet, presentation), web (html, css), LaTeX, Google Earth) (...)

  • MediaSPIP Player: potential problems

    22 February 2011

    The player does not work in Internet Explorer
    On Internet Explorer (8 and 7 at least), the plugin uses the Flash player Flowplayer to play video and audio. If the player does not seem to work, the cause may be the configuration of Apache's mod_deflate module.
    If that module's configuration contains a line resembling the following, try removing it or commenting it out to see whether the player then works correctly: (...)

On other sites (15321)

  • Android FFmpeg Video Recording Delete Last Recorded Part

    17 April 2015, by user3587194

    I’m trying to do exactly what this picture shows.

    Anyway, how can I delete part of a video? The code I was testing is on GitHub.

    It uses a progress bar that advances while you record, and it keeps recordings in separate segments. What is confusing to me is figuring out where and how to grab each segment, to decide whether or not to delete it.
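
    One workable pattern for the delete-last-segment behaviour (a sketch only; the class and method names are hypothetical, not taken from the linked project) is to record each press-and-hold into its own temporary file and keep those files on a stack. Deleting the last segment is then just popping and deleting the newest file, and the final clip is whatever remains, concatenated in order:

    import java.io.File;
    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Deque;
    import java.util.List;

    // Sketch: one temp file per recorded segment, newest on top of the stack.
    public class SegmentStack {

        private final Deque<File> segments = new ArrayDeque<File>();

        // Call when the user starts a new press-and-hold recording.
        public void startSegment(File tempVideoFile) {
            segments.push(tempVideoFile);
        }

        // Call when the delete button is confirmed: drop only the newest segment.
        public boolean deleteLastSegment() {
            File last = segments.poll();
            return last != null && last.delete();
        }

        // Remaining segments in recording order, ready to be concatenated
        // (e.g. by something like the concatenateMultipleFiles() helper below).
        public List<File> remainingOldestFirst() {
            List<File> files = new ArrayList<File>(segments);
            Collections.reverse(files); // push() stores newest first
            return files;
        }
    }

    With that bookkeeping, each progress-bar segment corresponds to exactly one file on the stack. The posted onPreviewFrame callback, where each preview frame is timestamped and handed to the recorder, follows: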

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
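       // Derive a timestamp for this preview frame from the audio clock:
       // fall back to wall-clock time before any audio has arrived, and if
       // the audio timestamp hasn't advanced since the last frame, extrapolate
       // forward by one frame duration.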

       long frameTimeStamp = 0L;

       if (mAudioTimestamp == 0L && firstTime > 0L)
           frameTimeStamp = 1000L * (System.currentTimeMillis() - firstTime);

       else if (mLastAudioTimestamp == mAudioTimestamp)
           frameTimeStamp = mAudioTimestamp + frameTime;

       else {
           long l2 = (System.nanoTime() - mAudioTimeRecorded) / 1000L;
           frameTimeStamp = l2 + mAudioTimestamp;
           mLastAudioTimestamp = mAudioTimestamp;
       }

       synchronized (mVideoRecordLock) {
           //Log.e("recorder", "mVideoRecordLock " + mVideoRecordLock);

           if (recording && rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null) {

               if (isFirstFrame) {
                   isFirstFrame = false;
                   firstData = data;
               }

               // total elapsed recording time, excluding pauses and one frame duration
               totalTime = System.currentTimeMillis() - firstTime - pausedTime - (long) (1000.0 / frameRate);

               if (lastSavedframe != null && !deleteEnabled) {
                   deleteEnabled = true;
                   deleteBtn.setVisibility(View.VISIBLE);
                   cancelBtn.setVisibility(View.GONE);
               }

               if (!nextEnabled && totalTime >= recordingChangeTime) {
                   Log.e("recording", "totalTime >= recordingChangeTime " + totalTime + " " + recordingChangeTime);
                   nextEnabled = true;
                   nextBtn.setVisibility(View.VISIBLE);
               }

               if (nextEnabled && totalTime >= recordingMinimumTime) {
                   mHandler.sendEmptyMessage(5);
               }

               if (currentRecorderState == RecorderState.PRESS && totalTime >= recordingChangeTime) {
                   currentRecorderState = RecorderState.LOOSEN;
                   mHandler.sendEmptyMessage(2);
               }              
               mVideoTimestamp += frameTime;

               if (lastSavedframe.getTimeStamp() > mVideoTimestamp)
                   mVideoTimestamp = lastSavedframe.getTimeStamp();

               try {
                   yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
                   videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                   videoRecorder.record(yuvIplImage);

               } catch (com.googlecode.javacv.FrameRecorder.Exception e) {
                       e.printStackTrace();
               }

           }
           // Rotate the preview frame to match the camera orientation:
           // 270° for the front camera, 90° for the back camera.
           byte[] tempData;
           if (cameraSelection == 1)
               tempData = rotateYUV420Degree270(data, previewWidth, previewHeight);
           else
               tempData = rotateYUV420Degree90(data, previewWidth, previewHeight);

           lastSavedframe = new SavedFrames(tempData, frameTimeStamp);
           //Log.e("recorder", "lastSavedframe " + lastSavedframe);
       }
    }
}

    And here is the Util class from the same project:

    public class Util {

    public static ContentValues videoContentValues = null;

    public static String getRecordingTimeFromMillis(long millis) {

       String strRecordingTime = null;

       int seconds = (int) (millis / 1000);
       int minutes = seconds / 60;
       int hours = minutes / 60;

       if (hours >= 0 && hours < 10)
           strRecordingTime = "0" + hours + ":";
       else
           strRecordingTime = hours + ":";

       if (hours > 0)
           minutes = minutes % 60;

       if (minutes >= 0 && minutes < 10)
           strRecordingTime += "0" + minutes + ":";
       else
           strRecordingTime += minutes + ":";

       seconds = seconds % 60;

       if (seconds >= 0 && seconds < 10)
           strRecordingTime += "0" + seconds ;
       else
           strRecordingTime += seconds ;

       return strRecordingTime;
    }

    public static int determineDisplayOrientation(Activity activity, int defaultCameraId) {
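       // Standard orientation math from the Android Camera documentation:
       // combine the sensor's mounting angle with the current display
       // rotation, mirroring the result for the front-facing camera.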

       int displayOrientation = 0;

       if (Build.VERSION.SDK_INT > Build.VERSION_CODES.FROYO) {
           CameraInfo cameraInfo = new CameraInfo();
           Camera.getCameraInfo(defaultCameraId, cameraInfo);

           int degrees  = getRotationAngle(activity);

           if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
               displayOrientation = (cameraInfo.orientation + degrees) % 360;
               displayOrientation = (360 - displayOrientation) % 360;

           } else {
               displayOrientation = (cameraInfo.orientation - degrees + 360) % 360;
           }
       }
       return displayOrientation;
    }

    public static int getRotationAngle(Activity activity) {

       int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
       int degrees  = 0;

       switch (rotation) {
       case Surface.ROTATION_0:
           degrees = 0;
           break;

       case Surface.ROTATION_90:
           degrees = 90;
           break;

       case Surface.ROTATION_180:
           degrees = 180;
           break;

       case Surface.ROTATION_270:
           degrees = 270;
           break;
       }
       return degrees;
    }

    public static int getRotationAngle(int rotation) {

       int degrees  = 0;

       switch (rotation) {
       case Surface.ROTATION_0:
           degrees = 0;
           break;

       case Surface.ROTATION_90:
           degrees = 90;
           break;

       case Surface.ROTATION_180:
           degrees = 180;
           break;

       case Surface.ROTATION_270:
           degrees = 270;
           break;
       }
       return degrees;
    }

    public static String createImagePath(Context context){
       long dateTaken = System.currentTimeMillis();
       String title = Constants.FILE_START_NAME + dateTaken;
       String filename = title + Constants.IMAGE_EXTENSION;

       String dirPath = Environment.getExternalStorageDirectory()+"/Android/data/" + context.getPackageName()+"/video";
       File file = new File(dirPath);

       if(!file.exists() || !file.isDirectory())
           file.mkdirs();

       String filePath = dirPath + "/" + filename;
       return filePath;
    }

    public static String createFinalPath(Context context) {
       Log.e("util", "createFinalPath");
       long dateTaken = System.currentTimeMillis();
       String title = Constants.FILE_START_NAME + dateTaken;
       String filename = title + Constants.VIDEO_EXTENSION;
       String filePath = genrateFilePath(context, String.valueOf(dateTaken), true, null);

       ContentValues values = new ContentValues(7);
       values.put(Video.Media.TITLE, title);
       values.put(Video.Media.DISPLAY_NAME, filename);
       values.put(Video.Media.DATE_TAKEN, dateTaken);
       values.put(Video.Media.MIME_TYPE, "video/3gpp");
       values.put(Video.Media.DATA, filePath);
       videoContentValues = values;

       Log.e("util", "filePath " + filePath);
       return filePath;
    }

    public static void deleteTempVideo(Context context) {
       final String filePath = Environment.getExternalStorageDirectory() + "/Android/data/" + context.getPackageName() + "/video";
       new Thread(new Runnable() {

           @Override
           public void run() {
               File dir = new File(filePath);
               if (dir.isDirectory()) {
                   // remove every temp segment left in the app's video folder
                   for (File child : dir.listFiles()) {
                       child.delete();
                   }
               }
           }
       }).start();
    }

    private static String genrateFilePath(Context context,String uniqueId, boolean isFinalPath, File tempFolderPath) {
       String fileName = Constants.FILE_START_NAME + uniqueId + Constants.VIDEO_EXTENSION;
       String dirPath = Environment.getExternalStorageDirectory() + "/Android/data/" + context.getPackageName() + "/video";

       if (isFinalPath) {
           File file = new File(dirPath);
           if (!file.exists() || !file.isDirectory())
               file.mkdirs();
       } else
           dirPath = tempFolderPath.getAbsolutePath();
       String filePath = dirPath + "/" + fileName;
       return filePath;
    }

    public static String createTempPath(Context context, File tempFolderPath ) {
       long dateTaken = System.currentTimeMillis();
       String filePath = genrateFilePath(context,String.valueOf(dateTaken), false, tempFolderPath);
       return filePath;
    }

    public static File getTempFolderPath() {
       File tempFolder = new File(Constants.TEMP_FOLDER_PATH +"_" +System.currentTimeMillis());
       return tempFolder;
    }

    public static List<Camera.Size> getResolutionList(Camera camera) {
       Parameters parameters = camera.getParameters();
       List<Camera.Size> previewSizes = parameters.getSupportedPreviewSizes();
       return previewSizes;
    }

    public static RecorderParameters getRecorderParameter(int currentResolution) {
       RecorderParameters parameters = new RecorderParameters();
       if (currentResolution ==  Constants.RESOLUTION_HIGH_VALUE) {
           parameters.setAudioBitrate(128000);
           parameters.setVideoQuality(0);

       } else if (currentResolution ==  Constants.RESOLUTION_MEDIUM_VALUE) {
           parameters.setAudioBitrate(128000);
           parameters.setVideoQuality(5);

       } else if (currentResolution == Constants.RESOLUTION_LOW_VALUE) {
           parameters.setAudioBitrate(96000);
           parameters.setVideoQuality(20);
       }
       return parameters;
    }

    public static int calculateMargin(int previewWidth, int screenWidth) {

       int margin = 0;

       if (previewWidth <= Constants.RESOLUTION_LOW) {
           margin = (int) (screenWidth*0.12);

       } else if (previewWidth > Constants.RESOLUTION_LOW && previewWidth <= Constants.RESOLUTION_MEDIUM) {
           margin = (int) (screenWidth*0.08);

       } else if (previewWidth > Constants.RESOLUTION_MEDIUM && previewWidth <= Constants.RESOLUTION_HIGH) {
           margin = (int) (screenWidth*0.08);
       }
       return margin;
    }

    public static int setSelectedResolution(int previewHeight) {

       int selectedResolution = 0;

       if(previewHeight <= Constants.RESOLUTION_LOW) {
           selectedResolution = 0;

       } else if (previewHeight > Constants.RESOLUTION_LOW && previewHeight <= Constants.RESOLUTION_MEDIUM) {
           selectedResolution = 1;

       } else if (previewHeight > Constants.RESOLUTION_MEDIUM && previewHeight <= Constants.RESOLUTION_HIGH) {
           selectedResolution = 2;
       }
       return selectedResolution;
    }

    public static class ResolutionComparator implements Comparator<Camera.Size> {

       @Override
       public int compare(Camera.Size size1, Camera.Size size2) {

           if(size1.height != size2.height)
               return size1.height -size2.height;
           else
               return size1.width - size2.width;
       }
    }


    public static void concatenateMultipleFiles(String inpath, String outpath)
    {
       File folder = new File(inpath);
       File[] files = folder.listFiles();

       if (files != null && files.length > 0)
       {
           // (the loop body was lost when the post was scraped; it iterated over
           // the segment files and concatenated them into outpath, apparently
           // via a native "libencoding.so" helper)
       }
    }

    private static HashMap<String, String> getMetaData()
    {
       HashMap<String, String> localHashMap = new HashMap<String, String>();
       localHashMap.put("creation_time", new SimpleDateFormat("yyyy_MM_dd_HH_mm_ss_SSSZ").format(new Date()));
       return localHashMap;
    }

    // Despite the "Ns" in the name, this returns microseconds, assuming
    // 44.1 kHz audio (one sample lasts 1 / 0.0441 ≈ 22.7 µs).
    public static int getTimeStampInNsFromSampleCounted(int paramInt) {
       return (int)(paramInt / 0.0441D);
    }

    /*public static void saveReceivedFrame(SavedFrames frame) {

       File cachePath = new File(frame.getCachePath());
       BufferedOutputStream bos;

       try {
           bos = new BufferedOutputStream(new FileOutputStream(cachePath));
           if (bos != null) {
               bos.write(frame.getFrameBytesData());
               bos.flush();
               bos.close();
           }

       } catch (FileNotFoundException e) {
           e.printStackTrace();
           cachePath = null;

       } catch (IOException e) {
           e.printStackTrace();
           cachePath = null;
       }
    }*/

    public static Toast showToast(Context context, String textMessage, int timeDuration) {

       if (null == context) {
           return null;
       }

       textMessage = (null == textMessage ? "Oops! " : textMessage.trim());
       Toast t = Toast.makeText(context, textMessage, timeDuration);
       t.show();
       return t;
    }

    public static void showDialog(Context context, String title, String content, int type, final Handler handler) {
       final Dialog dialog = new Dialog(context, R.style.Dialog_loading);
       dialog.setCancelable(true);

       LayoutInflater inflater = LayoutInflater.from(context);
       View view = inflater.inflate(R.layout.global_dialog_tpl, null);

       Button confirmButton = (Button) view.findViewById(R.id.setting_account_bind_confirm);
       Button cancelButton = (Button) view.findViewById(R.id.setting_account_bind_cancel);

       TextView dialogTitle = (TextView) view.findViewById(R.id.global_dialog_title);

       View line_hori_center = view.findViewById(R.id.line_hori_center);
       confirmButton.setVisibility(View.GONE);
       line_hori_center.setVisibility(View.GONE);
       TextView textView = (TextView) view.findViewById(R.id.setting_account_bind_text);

       Window dialogWindow = dialog.getWindow();
       WindowManager.LayoutParams lp = dialogWindow.getAttributes();
       lp.width = (int) (context.getResources().getDisplayMetrics().density*288);
       dialogWindow.setAttributes(lp);

       if(type != 1 && type != 2){
           type = 1;
       }
       dialogTitle.setText(title);
       textView.setText(content);

       if(type == 1 || type == 2){
           confirmButton.setVisibility(View.VISIBLE);
           confirmButton.setOnClickListener(new OnClickListener(){
               @Override
               public void onClick(View v){
                   if(handler != null){
                       Message msg = handler.obtainMessage();
                       msg.what = 1;
                       handler.sendMessage(msg);
                   }
                   dialog.dismiss();
               }
           });
       }
       // cancel button handler
       if(type == 2){
           cancelButton.setVisibility(View.VISIBLE);
           line_hori_center.setVisibility(View.VISIBLE);
           cancelButton.setOnClickListener(new OnClickListener(){
               @Override
               public void onClick(View v){
                   if(handler != null){
                       Message msg = handler.obtainMessage();
                       msg.what = 0;
                       handler.sendMessage(msg);
                   }
                   dialog.dismiss();
               }
           });
       }
       dialog.addContentView(view, new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT));
       dialog.setCancelable(true);             // close on the back key
       dialog.setCanceledOnTouchOutside(true); // close on a tap outside the dialog
       dialog.show();
    }

    public IplImage getFrame(String filePath) {
       Log.e("util", "getFrame" + filePath);
       CvCapture capture = cvCreateFileCapture(filePath);
       Log.e("util", "capture " + capture);
       IplImage image = cvQueryFrame(capture);
       Log.e("util", "image " + image);
       return image;
       }
    }

  • Problems with Streaming a Multicast RTSP Stream with Live555

    16 June 2014, by ALM865

    I am having trouble setting up a multicast RTSP session using Live555. The examples included with Live555 are mostly irrelevant, as they deal with reading in files; my code differs because it reads in encoded frames generated by an FFMPEG thread within my own program (no pipes, no saving to disk: it genuinely passes pointers to memory that contain the encoded frames for Live555 to stream).

    My Live555 project uses a custom ServerMediaSubsession so that I can receive data from an FFMPEG thread within my program (instead of Live555's default reading from a file, yuk!). This is a requirement of my program, as it reads in a GigEVision stream in one thread, sends the decoded raw RGB packets to the FFMPEG thread, which in turn sends the encoded frames off to Live555 for RTSP streaming.

    For the life of me I can't work out how to send the RTSP stream as multicast instead of unicast!

    Just a note: my program works perfectly at the moment streaming unicast, so there is nothing wrong with my Live555 implementation (before you go crazy picking out irrelevant errors!). I just need to know how to modify my existing code to stream multicast instead of unicast.
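
    For reference, the approach Live555's own multicast demos take (e.g. testMPEG1or2VideoStreamer), sketched here against the names used in this post rather than offered as a drop-in fix: for multicast you do not use OnDemandServerMediaSubsession at all. You create the Groupsocks on a multicast address, the RTPSink and the RTCPInstance up front, start the sink playing, and publish them through a PassiveServerMediaSubsession:

    // Sketch only, in the style of testMPEG1or2VideoStreamer; env, sms,
    // rtspServer and inputDevice are the objects from the code below.
    struct in_addr destinationAddress;
    destinationAddress.s_addr = inet_addr("239.255.12.42"); // multicast group

    const unsigned short rtpPortNum = 8888;
    const unsigned char ttl = 7;
    Groupsock rtpGroupsock(*env, destinationAddress, Port(rtpPortNum), ttl);
    Groupsock rtcpGroupsock(*env, destinationAddress, Port(rtpPortNum + 1), ttl);

    setVideoRTPSinkBufferSize();
    RTPSink* videoSink = MPEG1or2VideoRTPSink::createNew(*env, &rtpGroupsock);

    // RTCP is needed so receivers can synchronise; the CNAME just identifies us.
    const unsigned estimatedSessionBandwidthKbps = 4500;
    static unsigned char CNAME[] = "analyser";
    RTCPInstance* rtcp = RTCPInstance::createNew(*env, &rtcpGroupsock,
        estimatedSessionBandwidthKbps, CNAME, videoSink, NULL /* we're a server */);

    // Wrap the custom FFMPEG-fed source in the same framer used below, start
    // streaming immediately, and let clients join the running multicast session.
    FramedSource* videoSource =
        MPEG1or2VideoStreamDiscreteFramer::createNew(*env, inputDevice->videoSource());
    videoSink->startPlaying(*videoSource, NULL /* afterPlaying */, NULL);

    sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
    rtspServer->addServerMediaSession(sms);

    With a passive subsession the stream plays continuously to the group address, and the RTSP server merely hands out the multicast parameters in the SDP; that is exactly the difference between the unicast and multicast cases.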

    My program is way too big to upload and share, so I'm just going to share the important bits:

    Live_AnalysingServerMediaSubsession.h

    #ifndef _ANALYSING_SERVER_MEDIA_SUBSESSION_HH
    #define _ANALYSING_SERVER_MEDIA_SUBSESSION_HH

    // (the #include name was lost when the post was scraped; the base class's
    // header is the likely candidate)
    #include <OnDemandServerMediaSubsession.hh>
    #include "Live_AnalyserInput.h"

    class AnalysingServerMediaSubsession: public OnDemandServerMediaSubsession {

    public:
     static AnalysingServerMediaSubsession*
     createNew(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate,
           Boolean iFramesOnly = False,
               double vshPeriod = 5.0
               /* how often (in seconds) to inject a Video_Sequence_Header,
                  if one doesn't already appear in the stream */);

    protected: // we're a virtual base class
     AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& analyserInput, unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod);
     virtual ~AnalysingServerMediaSubsession();

    protected:
     AnalyserInput& fAnalyserInput;
     unsigned fEstimatedKbps;

    private:
     Boolean fIFramesOnly;
     double fVSHPeriod;

     // redefined virtual functions
     virtual FramedSource* createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate);
     virtual RTPSink* createNewRTPSink(Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic, FramedSource* inputSource);

    };

    #endif

    And "Live_AnalysingServerMediaSubsession.cpp"

    #include "Live_AnalysingServerMediaSubsession.h"
    // (three #include names were lost when the post was scraped; judging by what
    // this file uses, they were most likely the following liveMedia headers)
    #include <MPEG1or2VideoStreamDiscreteFramer.hh>
    #include <MPEG1or2VideoRTPSink.hh>
    #include <GroupsockHelper.hh>

    AnalysingServerMediaSubsession* AnalysingServerMediaSubsession::createNew(UsageEnvironment& env, AnalyserInput& wisInput, unsigned estimatedBitrate,
       Boolean iFramesOnly,
       double vshPeriod) {
           return new AnalysingServerMediaSubsession(env, wisInput, estimatedBitrate,
               iFramesOnly, vshPeriod);
    }

    AnalysingServerMediaSubsession
       ::AnalysingServerMediaSubsession(UsageEnvironment& env, AnalyserInput& analyserInput,   unsigned estimatedBitrate, Boolean iFramesOnly, double vshPeriod)
       : OnDemandServerMediaSubsession(env, True /*reuse the first source*/),

       fAnalyserInput(analyserInput), fIFramesOnly(iFramesOnly), fVSHPeriod(vshPeriod) {
           fEstimatedKbps = (estimatedBitrate + 500)/1000;

    }

    AnalysingServerMediaSubsession
       ::~AnalysingServerMediaSubsession() {
    }

    FramedSource* AnalysingServerMediaSubsession ::createNewStreamSource(unsigned /*clientSessionId*/, unsigned& estBitrate) {
       estBitrate = fEstimatedKbps;

       // Create a framer for the Video Elementary Stream:
       //LOG_MSG("Create Net Stream Source [%d]", estBitrate);

       return MPEG1or2VideoStreamDiscreteFramer::createNew(envir(), fAnalyserInput.videoSource());
    }

    RTPSink* AnalysingServerMediaSubsession ::createNewRTPSink(Groupsock* rtpGroupsock, unsigned char /*rtpPayloadTypeIfDynamic*/, FramedSource* /*inputSource*/) {
       setVideoRTPSinkBufferSize();
       /*
       struct in_addr destinationAddress;
       destinationAddress.s_addr = inet_addr("239.255.12.42");

       rtpGroupsock->addDestination(destinationAddress,8888);
       rtpGroupsock->multicastSendOnly();
       */
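       // (Adding a multicast destination to this groupsock, as attempted above,
       // is not enough: OnDemandServerMediaSubsession builds a fresh unicast
       // groupsock per client session. The multicast path needs the sink and
       // groupsock created up front and wrapped in a PassiveServerMediaSubsession;
       // see the sketch near the top of this post.)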
       return MPEG1or2VideoRTPSink::createNew(envir(), rtpGroupsock);
    }

    Live_AnalyserSouce.h

    #ifndef _ANALYSER_SOURCE_HH
    #define _ANALYSER_SOURCE_HH

    #ifndef _FRAMED_SOURCE_HH
    #include "FramedSource.hh"
    #endif

    class FFMPEG;

    // The following class can be used to define specific encoder parameters
    class AnalyserParameters {
    public:
     FFMPEG * Encoding_Source;
    };

    class AnalyserSource: public FramedSource {
    public:
     static AnalyserSource* createNew(UsageEnvironment& env, FFMPEG * E_Source);
     static unsigned GetRefCount();


    public:
     static EventTriggerId eventTriggerId;

    protected:
     AnalyserSource(UsageEnvironment& env, FFMPEG *  E_Source);
     // called only by createNew(), or by subclass constructors
     virtual ~AnalyserSource();

    private:
     // redefined virtual functions:
     virtual void doGetNextFrame();

    private:
     static void deliverFrame0(void* clientData);
     void deliverFrame();


    private:
     static unsigned referenceCount; // used to count how many instances of this class currently exist
     FFMPEG * Encoding_Source;

     unsigned int Last_Sent_Frame_ID;
    };

    #endif

    Live_AnalyserSource.cpp

    #include "Live_AnalyserSource.h"
    #include <sys/time.h> // for "gettimeofday()"
    #include "FFMPEGClass.h"

    AnalyserSource* AnalyserSource::createNew(UsageEnvironment& env, FFMPEG * E_Source) {
     return new AnalyserSource(env, E_Source);
    }


    EventTriggerId AnalyserSource::eventTriggerId = 0;

    unsigned AnalyserSource::referenceCount = 0;

    AnalyserSource::AnalyserSource(UsageEnvironment& env, FFMPEG * E_Source) : FramedSource(env), Encoding_Source(E_Source) {
     if (referenceCount == 0) {
       // Any global initialization of the device would be done here:

     }
     ++referenceCount;

     // Any instance-specific initialization of the device would be done here:
     Last_Sent_Frame_ID = 0;

     /* register us with the Encoding thread so we'll get notices when new frame data turns up.. */
     Encoding_Source->RegisterRTSP_Source(&(env.taskScheduler()), this);

     // We arrange here for our "deliverFrame" member function to be called
     // whenever the next frame of data becomes available from the device.
     //
     // If the device can be accessed as a readable socket, then one easy way to do this is using a call to
     //     envir().taskScheduler().turnOnBackgroundReadHandling( ... )
     // (See examples of this call in the "liveMedia" directory.)
     //
     // If, however, the device *cannot* be accessed as a readable socket, then instead we can implement it using 'event triggers':
     // Create an 'event trigger' for this device (if it hasn't already been done):
     if (eventTriggerId == 0) {
       eventTriggerId = envir().taskScheduler().createEventTrigger(deliverFrame0);
     }
    }

    AnalyserSource::~AnalyserSource() {
     // Any instance-specific 'destruction' (i.e., resetting) of the device would be done here:

     /* de-register this source from the Encoding thread, since we no longer need notices.. */
     Encoding_Source->Un_RegisterRTSP_Source(this);

     --referenceCount;
     if (referenceCount == 0) {
       // Any global 'destruction' (i.e., resetting) of the device would be done here:

       // Reclaim our 'event trigger'
       envir().taskScheduler().deleteEventTrigger(eventTriggerId);
       eventTriggerId = 0;
     }

    }

    unsigned AnalyserSource::GetRefCount() {
     return referenceCount;
    }

    void AnalyserSource::doGetNextFrame() {
     // This function is called (by our 'downstream' object) when it asks for new data.
     //LOG_MSG("Do Next Frame..");
     // Note: If, for some reason, the source device stops being readable (e.g., it gets closed), then you do the following:
     //if (0 /* the source stops being readable */ /*%%% TO BE WRITTEN %%%*/) {
     unsigned int FrameID = Encoding_Source->GetFrameID();
     if (FrameID == 0){
       //LOG_MSG("No Data. Close");
       handleClosure(this);
       return;
     }



     // If a new frame of data is immediately available to be delivered, then do this now:
     if (Last_Sent_Frame_ID != FrameID){
       deliverFrame();
       //DEBUG_MSG("Frame ID: %d",FrameID);
     }

     // No new data is immediately available to be delivered.  We don't do anything more here.
     // Instead, our event trigger must be called (e.g., from a separate thread) when new data becomes available.
    }

    void AnalyserSource::deliverFrame0(void* clientData) {
     ((AnalyserSource*)clientData)->deliverFrame();
    }

    void AnalyserSource::deliverFrame() {

     if (!isCurrentlyAwaitingData()) return; // we're not ready for the data yet


     static u_int8_t* newFrameDataStart;
     static unsigned newFrameSize = 0;

     /* get the data frame from the Encoding thread.. */
     if (Encoding_Source->GetFrame(&newFrameDataStart, &newFrameSize, &Last_Sent_Frame_ID)){
       if (newFrameDataStart!=NULL) {
           /* This should never happen, but check anyway.. */
           if (newFrameSize > fMaxSize) {
             fFrameSize = fMaxSize;
             fNumTruncatedBytes = newFrameSize - fMaxSize;
           } else {
             fFrameSize = newFrameSize;
           }
           gettimeofday(&fPresentationTime, NULL); // If you have a more accurate time - e.g., from an encoder - then use that instead.
           // If the device is *not* a 'live source' (e.g., it comes instead from a file or buffer), then set "fDurationInMicroseconds" here.
           /* move the data to be sent off.. */
           memmove(fTo, newFrameDataStart, fFrameSize);

           /* release the Mutex we had on the Frame's buffer.. */
           Encoding_Source->ReleaseFrame();
       }
       else {
           //AM Added, something bad happened
           //ALTRACE("LIVE555: FRAME NULL\n");
           fFrameSize=0;
           fTo=NULL;
           handleClosure(this);
       }
     }
     else {
       //LOG_MSG("Closing Connection due to Frame Error..");
       handleClosure(this);
     }


     // After delivering the data, inform the reader that it is now available:
     FramedSource::afterGetting(this);
    }

    Live_AnalyserInput.cpp

    #include "Live_AnalyserInput.h"
    #include "Live_AnalyserSource.h"


    ////////// WISInput implementation //////////

    AnalyserInput* AnalyserInput::createNew(UsageEnvironment& env, FFMPEG *Encoder) {
     if (!fHaveInitialized) {
       //if (!initialize(env)) return NULL;
       fHaveInitialized = True;
     }

     return new AnalyserInput(env, Encoder);
    }


    FramedSource* AnalyserInput::videoSource() {
     if (fOurVideoSource == NULL || AnalyserSource::GetRefCount() == 0) {
       fOurVideoSource = AnalyserSource::createNew(envir(), m_Encoder);
     }
     return fOurVideoSource;
    }


    AnalyserInput::AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder): Medium(env), m_Encoder(Encoder) {
    }

    AnalyserInput::~AnalyserInput() {
     /* When we get destroyed, make sure our source is also destroyed.. */
     if (fOurVideoSource != NULL && AnalyserSource::GetRefCount() != 0) {
       AnalyserSource::handleClosure(fOurVideoSource);
     }
    }




    Boolean AnalyserInput::fHaveInitialized = False;
    int AnalyserInput::fOurVideoFileNo = -1;
    FramedSource* AnalyserInput::fOurVideoSource = NULL;

    Live_AnalyserInput.h

    #ifndef _ANALYSER_INPUT_HH
    #define _ANALYSER_INPUT_HH

    #include
    #include "FFMPEGClass.h"


    class AnalyserInput: public Medium {
    public:
     static AnalyserInput* createNew(UsageEnvironment& env, FFMPEG *Encoder);

     FramedSource* videoSource();

    private:
     AnalyserInput(UsageEnvironment& env, FFMPEG *Encoder); // called only by createNew()
     virtual ~AnalyserInput();

    private:
     friend class WISVideoOpenFileSource;
     static Boolean fHaveInitialized;
     static int fOurVideoFileNo;
     static FramedSource* fOurVideoSource;
     FFMPEG *m_Encoder;
    };

    // Functions to set the optimal buffer size for RTP sink objects.
    // These should be called before each RTPSink is created.
    #define VIDEO_MAX_FRAME_SIZE 300000
    inline void setVideoRTPSinkBufferSize() { OutPacketBuffer::maxSize = VIDEO_MAX_FRAME_SIZE; }

    #endif

    And finally, the relevant code from my Live555 worker thread that starts the whole process:

       Stop_RTSP_Loop=0;
       //  MediaSession     *ms;
       TaskScheduler    *scheduler;
       UsageEnvironment *env ;
       //  RTSPClient       *rtsp;
       //  MediaSubsession  *Video_Sub;

       char RTSP_Address[1024];
       RTSP_Address[0]=0x00;

       if (m_Encoder == NULL){
           //DEBUG_MSG("No Video Encoder registered for the RTSP Encoder");
           return 0;
       }

       scheduler = BasicTaskScheduler::createNew();
       env = BasicUsageEnvironment::createNew(*scheduler);

       UserAuthenticationDatabase* authDB = NULL;
    #ifdef ACCESS_CONTROL
       // To implement client access control to the RTSP server, do the following:

       if (m_Enable_Pass){
           authDB = new UserAuthenticationDatabase;
           authDB->addUserRecord(UserN, PassW);
       }
       ////////// authDB = new UserAuthenticationDatabase;
       ////////// authDB->addUserRecord((char*)"Admin", (char*)"Admin"); // replace these with real strings
       // Repeat the above with each <username>, <password> that you wish to allow
       // access to the server.
    #endif

       // Create the RTSP server:
       RTSPServer* rtspServer = RTSPServer::createNew(*env, 554, authDB);
       ServerMediaSession* sms;

       AnalyserInput* inputDevice;


       if (rtspServer == NULL) {
           TRACE("LIVE555: Failed to create RTSP server: %s\n", env->getResultMsg());
           return 0;
       }
       else {
           char const* descriptionString = "Session streamed by \"IMC Server\"";



           // Initialize the WIS input device:
           inputDevice = AnalyserInput::createNew(*env, m_Encoder);
           if (inputDevice == NULL) {
               TRACE("Live555: Failed to create WIS input device\n");
               return 0;
           }
           else {
               // A MPEG-1 or 2 video elementary stream:
               /* Increase the buffer size so we can handle the high res stream.. */
               OutPacketBuffer::maxSize = 300000;
               // NOTE: This *must* be a Video Elementary Stream; not a Program Stream
               sms = ServerMediaSession::createNew(*env, RTSP_Address, RTSP_Address, descriptionString);

               //sms->addSubsession(MPEG1or2VideoFileServerMediaSubsession::createNew(*env, inputFileName, reuseFirstSource, iFramesOnly));

               sms->addSubsession(AnalysingServerMediaSubsession::createNew(*env, *inputDevice, m_Encoder->Get_Bitrate()));
               //sms->addSubsession(WISMPEG1or2VideoServerMediaSubsession::createNew(sms->envir(), inputDevice, videoBitrate));

               rtspServer->addServerMediaSession(sms);

               //announceStream(rtspServer, sms, streamName, inputFileName);
               //LOG_MSG("Play this stream using the URL %s", rtspServer->rtspURL(sms));

           }
       }

       Stop_RTSP_Loop=0;

       for (;;)
       {
           /* The actual work is all carried out inside the LIVE555 Task scheduler */
           env->taskScheduler().doEventLoop(&Stop_RTSP_Loop); // returns once Stop_RTSP_Loop is set non-zero

           if (mStop) {
               break;
           }
       }

       Medium::close(rtspServer); // will also reclaim "sms" and its "ServerMediaSubsession"s
       Medium::close(inputDevice);
  • Working on images asynchronously

    15 December 2013, by Mikko Koppanen (Imagick, PHP stuff)

    To get my buzzword quota for the day, we are going to look at using ZeroMQ and Imagick to create a simple asynchronous image-processing system. Why asynchronous? First of all, separating the image handling from interactive PHP scripts allows us to scale the image processing separately from the web heads. For example, we could do the image processing on separate servers that have SSDs attached and more memory. In this example, making the images available to all worker nodes is left to the reader.

    Secondly, separating the image processing from a web script can provide a more responsive experience for the user. This doesn't necessarily mean faster, but in a multiple-image upload scenario, for instance, this method allows the user to do something else on the site while we process the images in the background. This can be beneficial especially in cases where users upload hundreds of images at a time. To achieve a simple distributed image-processing infrastructure, we are going to use ZeroMQ for communicating between the different components and Imagick to work on the images.

    The first part we are going to create is a simple “Worker” process skeleton. Naturally, for a live environment you would want more error handling and possibly pcntl for process control, but for the sake of brevity the example is bare-bones:

    <?php

    define('THUMBNAIL_ADDR', 'tcp://127.0.0.1:5000');
    define('COLLECTOR_ADDR', 'tcp://127.0.0.1:5001');

    class Worker {

        private $in;
        private $out;
        private $commands = array();

        public function __construct($in_addr, $out_addr)
        {
            $context = new ZMQContext();

            $this->in = new ZMQSocket($context, ZMQ::SOCKET_PULL);
            $this->in->bind($in_addr);

            $this->out = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
            $this->out->connect($out_addr);
        }

        public function work() {
            while ($command = $this->in->recvMulti()) {
                if (isset($this->commands[$command[0]])) {
                    echo "Received work" . PHP_EOL;

                    $callback = $this->commands[$command[0]];

                    array_shift($command);
                    $response = call_user_func_array($callback, $command);

                    if (is_array($response))
                        $this->out->sendMulti($response);
                    else
                        $this->out->send($response);
                }
                else {
                    error_log("There is no registered worker for $command[0]");
                }
            }
        }

        public function register($command, $callback)
        {
            $this->commands[$command] = $callback;
        }
    }
    ?>

    The Worker class allows us to register commands with callbacks associated with them. In our case the Worker class doesn't actually care or know about the parameters being passed to the actual callback; it just blindly passes them on. We are using two separate sockets in this example: one for incoming work requests and one for passing the results onwards. This allows us to create a simple pipeline by adding more workers into the mix. For example, we could first have a watermark worker, which takes the original image and composites a watermark on it, then passes the file onwards to the thumbnail worker, which creates thumbnails in different sizes and passes the final results to the event collector.

    The next part we are going to create is a simple worker script that does the actual thumbnailing of the images:

    <?php
    include __DIR__ . '/common.php';

    // Create the worker, bind the inbound socket to THUMBNAIL_ADDR and
    // connect the outbound socket to COLLECTOR_ADDR
    $worker = new Worker(THUMBNAIL_ADDR, COLLECTOR_ADDR);

    // Register our thumbnail callback, nothing special here
    $worker->register('thumbnail', function ($filename, $width, $height) {
        $info = pathinfo($filename);

        $out = sprintf("%s/%s_%dx%d.%s",
                $info['dirname'],
                $info['filename'],
                $width,
                $height,
                $info['extension']);

        $status = 1;
        $message = '';

        try {
            $im = new Imagick($filename);
            $im->thumbnailImage($width, $height);
            $im->writeImage($out);
        }
        catch (Exception $e) {
            $status = 0;
            $message = $e->getMessage();
        }

        return array(
            'status'    => $status,
            'filename'  => $filename,
            'thumbnail' => $out,
            'message'   => $message,
        );
    });

    // Run the worker, will block
    echo "Running thumbnail worker.." . PHP_EOL;
    $worker->work();

    As you can see from the code, the thumbnail worker registers a callback for the ‘thumbnail’ command. The callback does the thumbnailing based on the input and returns the status, the original filename and the thumbnail filename. We have connected our Worker's “outbound” socket to the event collector, which will receive the results from the thumbnail worker and do something with them. What that “something” is depends on you. For example, you could push the response into a websocket to show immediate feedback to the user, or store the results into a database.
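
    As a purely illustrative sketch of the database option (the SQLite file and table layout are invented here, not part of the original post), a collector variant could persist each event instead of printing it:

    <?php
    include __DIR__ . '/common.php';

    // Events arrive as [status, filename, thumbnail, message]; keep one row per image
    $db = new PDO('sqlite:' . __DIR__ . '/thumbnails.db');
    $db->exec('CREATE TABLE IF NOT EXISTS thumbnails
               (status INTEGER, filename TEXT, thumbnail TEXT, message TEXT)');
    $stmt = $db->prepare('INSERT INTO thumbnails VALUES (?, ?, ?, ?)');

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    while (($message = $socket->recvMulti())) {
        $stmt->execute($message);
    }
    ?>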

    Our example event collector will just do a var_dump on every event it receives from the thumbnailer:

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PULL);
    $socket->bind(COLLECTOR_ADDR);

    echo "Waiting for events.." . PHP_EOL;
    while (($message = $socket->recvMulti())) {
        var_dump($message);
    }
    ?>

    The final piece of the puzzle is the client that pumps messages into the pipeline. The client connects to the thumbnail worker and passes on the filename and desired dimensions:

    <?php
    include __DIR__ . '/common.php';

    $socket = new ZMQSocket(new ZMQContext(), ZMQ::SOCKET_PUSH);
    $socket->connect(THUMBNAIL_ADDR);

    $socket->sendMulti(
        array(
            'thumbnail',
            realpath('./test.jpg'),
            50,
            50,
        )
    );
    echo "Sent request" . PHP_EOL;
    ?>

    After this, our processing pipeline will look like this:

    (figure: simple-pipeline)

    Now, if we notice that the thumbnail workers or the event collectors can't keep up with the rate of images we are pushing through, we can start scaling the pipeline by adding more processes on each layer. The ZeroMQ PUSH socket will automatically round-robin between all connected nodes, which makes adding more workers and event collectors simple. After adding more workers, our pipeline will look like this:

    (figure: scaling-pipeline)

    Using ZeroMQ also allows us to create more flexible architectures, by adding forwarding devices in the middle, adding request-reply workers, etc. So, the last thing to do is to run our pipeline and see the results:

    Let's create our test image first:

    $ convert magick:rose test.jpg
    

    From the command line, run the thumbnail script:

    $ php thumbnail.php 
    Running thumbnail worker..
    

    In a separate terminal window, run the event collector:

    $ php collector.php 
    Waiting for events..
    

    And finally, run the client to send the thumbnail request:

    $ php client.php 
    Sent request
    $
    

    If everything went according to plan, you should now see the following output in the event collector window:

    array(4) {
      [0]=>
      string(1) "1"
      [1]=>
      string(56) "/test.jpg"
      [2]=>
      string(62) "/test_50x50.jpg"
      [3]=>
      string(0) ""
    }

    Happy hacking!