
Other articles (14)

  • List of compatible distributions

    26 April 2011

    The table below lists the Linux distributions compatible with the automated installation script of MediaSPIP.

    | Distribution name | Version name | Version number |
    |-------------------|----------------------|----------------|
    | Debian | Squeeze | 6.x.x |
    | Debian | Wheezy | 7.x.x |
    | Debian | Jessie | 8.x.x |
    | Ubuntu | The Precise Pangolin | 12.04 LTS |
    | Ubuntu | The Trusty Tahr | 14.04 |
    If you want to help us improve this list, you can provide us access to a machine whose distribution is not mentioned above or send the necessary fixes to add (...)

  • Encoding and conversion into formats readable on the Internet

    10 April 2011

    MediaSPIP converts and re-encodes uploaded documents so that they can be viewed on the Internet and used automatically, without any intervention from the content creator.
    Videos are automatically encoded into the formats supported by HTML5: MP4, Ogv and WebM. The "MP4" version is also used for the fallback Flash player needed by older browsers.
    Audio documents are likewise re-encoded into the two formats usable with HTML5: MP3 and Ogg. The "MP3" version (...)

  • Common problems

    10 March 2010

    PHP with safe_mode enabled
    One of the main sources of problems is the PHP configuration, in particular having safe_mode enabled.
    The solution would be either to disable safe_mode or to place the script in a directory that Apache can access for the site.

On other sites (5427)

  • 5 Top Google Optimize Alternatives to Consider

    17 March 2023, by Erin — Analytics Tips

    Google Optimize is a popular conversion rate optimization (CRO) tool from Alphabet (parent company of Google). With it, you can run A/B, multivariate, and redirect tests to figure out which web page designs perform best. 

    Google Optimize seamlessly integrates with Google Analytics (GA). It also has a free tier. So many marketers chose it as their default A/B testing tool…until recently. 

    Google will sunset Google Optimize by 30 September 2023

    From this date onwards, Google will no longer support Optimize and Optimize 360 (the premium edition). Any experiments still active after this date will be paused automatically, and you’ll no longer have access to your historical records (unless you export them in advance).

    The better news is that you still have time to find a Google Optimize alternative — and this post will help you with that. 

    Disclaimer : Please note that the information provided in this blog post is for general informational purposes only and is not intended to provide legal advice. Every situation is unique and requires a specific legal analysis. If you have any questions regarding the legal implications of any matter, please consult with your legal team or seek advice from a qualified legal professional. 

    Best Google Optimize Alternatives 

    Google Optimize was among the first free A/B testing apps. But as with any product, it has some disadvantages. 

    Data updates happen every 24 hours, not in real time. A free account caps the number of experiments: you cannot run more than 5 experiments at a time or implement more than 16 combinations for multivariate testing (MVT). The premium version (Optimize 360) has fewer usage constraints, but it costs north of $150K per year. 

    Google Optimize has native integration with GA (of course), so you can review all the CRO data without switching apps. But Optimize doesn’t work well with Google Analytics alternatives, which many choose to use for privacy-friendly user tracking, higher data accuracy and GDPR compliance. 

    At the same time, many other conversion rate optimization (CRO) tools have emerged, often boasting better accuracy and more competitive features than Google Optimize.

    Here are 5 alternative A/B testing apps worth considering.

    Adobe Target 

    Adobe Target Homepage

    Adobe Target is an advanced personalization platform for optimising user and marketing experiences on digital properties. It uses machine learning algorithms to deliver dynamic content, personalised promotions and custom browsing experiences to visitors based on their behaviour and demographic data. 

    Adobe Target also provides A/B testing and multivariate testing (MVT) capabilities to help marketers test and refine their digital experiences.

    Key features : 

    • Visual experience builder for A/B tests setup and replication 
    • Full factorial multivariate tests and multi-armed bandit testing
    • Omnichannel personalisation across web properties 
    • Multiple audience segmentation and targeting options 
    • Personalised content, media and product recommendations 
    • Advanced customer intelligence (in conjunction with other Adobe products)

    Pros

    • Convenient A/B test design tool 
    • Accurate MVT and MAB results 
    • Powerful segmentation capabilities 
    • Access to extra behavioural analytics 
    • One-click personalisation activation 
    • Supports rules-based, location-based and contextual personalisation
    • Robust omnichannel analytics in conjunction with other Adobe products 

    Cons 

    • Requires an Adobe Marketing Cloud subscription 
    • No free trial or freemium tier 
    • More complex product setup and configuration 
    • Steep learning curve for new users 

    Price : On-demand. 

    Adobe Target is sold as part of Adobe Marketing Cloud. Licence costs vary, based on selected subscriptions and the number of users, but are typically above $10K.

    Google Optimize vs Adobe Target : The Verdict 

    Google Optimize comes with a free tier, unlike Adobe Target. It provides you with a basic builder for A/B and MVT tests, but none of the personalisation tools Adobe has. Because of their ease of use and lower price, other Google Optimize alternatives are better suited to small and medium-sized businesses doing baseline CRO for funnel optimisation. 

    Adobe Target pulls you into the vast Adobe marketing ecosystem, offering comprehensive customer behaviour analytics, machine-learning-driven website optimisation, dynamic content recommendations, product personalisation and extensive reporting. The app is better suited for larger enterprises with a significant investment in digital marketing.

    Matomo A/B Testing

    Matomo A/B testing page

    Matomo A/B Testing is a CRO tool integrated into Matomo. All Matomo Cloud users get instant access to it, while On-Premise (free) Matomo users can purchase A/B testing as a plugin.

    With Matomo A/B Testing, you can create multiple variations of a web or mobile page and test them with different segments of your audience. Matomo also doesn’t have any strict experiment caps, unlike Google Optimize. 

    You can split-test multiple creative variants for on-site assets such as buttons, slogans, titles, call-to-actions, image positions and more. You can even benchmark the performance of two (or more!) completely different homepage designs, for instance. 

    With us, you can compliantly and ethically collect historical data about any visitor who has entered one of the active tests — and monitor their entire customer journey. You can also leverage Matomo A/B Testing data as part of multi-touch attribution modelling to determine which channels bring the best leads and which assets drive them towards conversion. 

     

    Since Matomo A/B Testing is part of our analytics platform, it works well with other features such as goal tracking, heatmaps, user session recordings and more. 

    Key features

    • Run experiments for web, mobile, email and digital campaigns 
    • Convenient A/B test design interface 
    • One-click experiment scheduling 
    • Integration with historic visitor profiles
    • Near real-time conversion tracking 
    • Apply segmentation to Matomo reports 
    • Easy creative variation sharing via a URL 

    Pros

    • High data accuracy with no reporting gaps 
    • Monitor the evolution of your success metrics for each variation
    • Embed experiments across multiple digital channels 
    • Set a custom confidence threshold for winning variations 
    • No compromises on user privacy 
    • Free 21-day trial available (for Matomo Cloud) and free 30-day plugin trial (for Matomo On-Premise)

    Cons

    • No on-site personalisation tools available 
    • Configuration requires some coding experience 

    Price : Matomo A/B Testing is included in the monthly Cloud plan (starting at €19 per month). On-Premise users can buy this functionality as a plugin (starting at €199/year). 

    Google Optimize vs Matomo A/B Testing : The Verdict 

    Matomo offers the same types of A/B testing features as Google Optimize (and some extras!), but without any usage caps. Unlike Matomo, Google Optimize doesn’t support A/B tests for mobile apps. You can access some content testing features for Android apps via Firebase, but this requires another subscription. 

    Matomo lets you run A/B experiments across web and mobile properties, plus desktop apps, email campaigns and digital ads. Also, Matomo has higher conversion data accuracy, thanks to our privacy-focused method for collecting website analytics. 

    When using Matomo in most EU markets, you’re legally exempt from showing a cookie consent banner. Meaning you can collect richer insights for each experiment and make data-driven decisions. Nearly 40% of global consumers reject cookie consent banners. With most other tools, you won’t be getting the full picture of your traffic. 

    Optimizely 

    Optimizely homepage

    Optimizely is a conversion optimization platform that offers several competitive products, each sold as a separate subscription. These include a flexible content management system (CMS), a content marketing platform, a web A/B testing app, a mobile feature testing product and two eCommerce-specific website management products.

    The Web Experimentation app allows you to optimise every customer touchpoint by scheduling unlimited split or multivariate tests and tracking conversions across all your projects from the same app. Apart from websites, this subscription also supports experiments for single-page applications. But if you want more advanced mobile app testing features, you’ll have to purchase another product — Feature Experimentation. 

    Key features :

    • Intuitive experiment design tool 
    • Cross-browser testing and experiment preview 
    • Multi-page funnel tests design 
    • Behavioural and geo-targeting 
    • Exit/bounce rate tracking
    • Custom audience builder for experiments
    • Comprehensive reporting 

    Pros

    • Unlimited number of concurrent experiments 
    • Upload your audience data for test optimisation 
    • Dynamic content personalisation available on a higher tier 
    • Pre-made integrations with popular heatmap and analytics tools 
    • Supports segmentation by device, campaign type, traffic sources or referrer 

    Cons

    • You need a separate subscription for mobile CRO 
    • Free trial not available, pricing on-demand 
    • Multiple licences and subscriptions may be required 
    • Doesn’t support A/B tests for emails 

    Price : Available on-demand. 

    The Web Experimentation tool has three subscription tiers — Grow, Accelerate and Scale — with different features included. 

    Google Optimize vs Optimizely : The Verdict 

    Optimizely is a strong contender as a Google Optimize alternative, as it offers more advanced audience targeting and segmentation options. You can target users by IP address, cookies, traffic sources, device type, browser, language, location or a custom utm_campaign parameter.

    Similar to Matomo A/B testing, Optimizely doesn’t limit the number of projects or concurrent experiments you can do. But you have to immediately sign an annual contract (no monthly plans are available). Pricing also varies based on the number of processed impressions (more experiments = a higher annual bill). An annual licence can cost $63,700 for 10 million impressions on average, according to an independent estimate. 

    Visual Website Optimizer (VWO) 

    VWO is another popular experimentation platform, supporting web, mobile and server-side A/B testing and personalisation campaigns.

    Similar to others, VWO offers a drag-and-drop visual editor for creating campaign variants. You don’t need design or coding knowledge to create tests. Once you’re all set, the app will benchmark your experiment performance against expected conversion rates, report on differences in conversion rate and point towards the best-performing creative. 

    Similar to Optimizely, VWO also offers web/mobile app optimisation as a separate subscription. Apart from testing visual page elements, you can also run in-app experiments throughout the product stack to locate new revenue opportunities. For example, you can test in-app subscription flows, search algorithms or navigation flows to improve product UX. 

    Key features :

    • Multivariate and multi-arm bandit tests 
    • Multi-step (funnel) split tests 
    • Collaborative experiment tracking dashboard 
    • Target users by different attributes (URL, device, geo-data) 
    • Personal library of creative elements 
    • Funnel analytics, session records, and heatmaps available 

    Pros

    • Free starter plan is available (similar to Google Optimize)
    • Simple tracking code installation and easy code editor
    • Offers online reporting dashboards and report downloads 
    • Slice-and-dice reports by different audience dimensions
    • No impact on website/app loading speed and performance 

    Cons

    • Multivariate testing is only available on a higher-tier plan 
    • Annual contract required, despite monthly billing 
    • Mobile app A/B split tests require another licence 
    • Requires ongoing user training 

    Price : Free limited plan available. 

    Then from $356/month, billed annually. 

    Google Optimize vs VWO : The Verdict 

    The free plan on VWO is very similar to Google Optimize. You get access to A/B testing and split URL testing features for websites only. The visual editing tool is relatively simple — and you can use URL or device targeting. 

    Free VWO reports, however, lack the advertised depth in terms of behavioural or funnel-based reporting. In-depth insights are available only to premium users. Extra advertised features like heatmaps, form analytics and session recordings require yet another subscription. With Matomo Cloud, you get all three of these together with A/B testing. 

    ConvertFlow 

    ConvertFlow Homepage

    ConvertFlow markets itself as a funnel optimisation app for eCommerce and SaaS companies. It meshes lead generation tools with some CRO workflows. 

    With ConvertFlow, you can effortlessly design opt-in forms, pop-ups, quizzes and even entire landing pages using pre-made web elements and a visual builder. Afterwards, you can put all of these assets to a “field test” via the ConvertFlow CRO platform. Select among pre-made templates or create custom variants for split or multivariate testing. You can customise tests based on URLs, cookie data and user geolocation among other factors. 

    Similar to Adobe Target, ConvertFlow also allows you to run tests targeted at specific customer segments in your CRM. The app has native integrations with HubSpot and Salesforce, so this feature is easy to enable. ConvertFlow also offers advanced targeting and segmentation options, based on user on-site behaviour, demographics data or known interests.

    Key features :

    • Create and test landing pages, quizzes, pop-ups, surveys and other lead-gen assets. 
    • All-in-one funnel builder for creating demand-generation campaigns 
    • Campaign personalisation, based on on-site activity 
    • Re-usable dynamic visitor segments for targeting 
    • Multi-step funnel design and customisation 
    • Embedded forms for split testing CTAs on existing pages 

    Pros

    • Allows controlling the traffic split for each variant to get objective results 
    • Pre-made integration with Google Analytics and Google Tag Manager 
    • Conversion and funnel reports, available for each variant 
    • Access to a library with 300+ conversion campaign templates
    • Apply progressive visitor profiling to dynamically adjust user experiences 

    Cons

    • Each plan covers only 10K views; each extra 10K costs another $20/month 
    • Only one website allowed per account (except for the Teams plan) 
    • Doesn’t support experiments in mobile apps 
    • Not all CRO features are available on the Pro plan 

    Price : Access to CRO features costs from $300/month on a Pro plan. Subscription costs also increase, based on the total number of monthly views. 

    Google Optimize vs ConvertFlow : The Verdict 

    ConvertFlow is just as convenient to use in conjunction with Google Analytics as Google Optimize is. But the similarities end there, since ConvertFlow combines funnel design features with CRO tools. 

    With ConvertFlow, you can run more advanced experiments and apply more targeting criteria than with Google Optimize. You can observe user behaviour and conversion rates across multi-step CTA forms and page funnels, plus benefit from first-touch attribution reporting without switching apps. 

    Though ConvertFlow has a free plan, it doesn’t include access to the CRO features, meaning it’s not a free alternative to Google Optimize.

    Comparison of the Top 5 Google Optimize Alternatives

    | Feature | Google Optimize | Adobe Target | Matomo A/B Testing | Optimizely | VWO | ConvertFlow |
    |---|---|---|---|---|---|---|
    | Supported channels | Web | Web, mobile, social media, email | Web, mobile, email, digital campaigns | Websites & mobile apps | Websites, web and mobile apps | Websites and mobile apps |
    | A/B testing | Yes | Yes | Yes | Yes | Yes | Yes |
    | Easy GA integration | Yes | No | Yes | Yes | Yes | Yes |
    | Integrations with other web analytics apps | No | No | Yes | Yes | No | Yes |
    | Audience segmentation | Basic | Advanced | Advanced | Advanced | Advanced | Advanced |
    | Geo-targeting | Yes | Yes | No | Yes | Yes | Yes |
    | Behavioural targeting | Basic | Advanced | Advanced | Advanced | Advanced | Advanced |
    | Heatmaps | No | No | Yes (no extra cost with Matomo Cloud) | Partial (via integrations) | Partial (requires another subscription) | No |
    | Session recordings | No | No | Yes (no extra cost with Matomo Cloud) | No | Partial (requires another subscription) | No |
    | Multivariate testing (MVT) | Yes | Yes | Yes | Yes | Yes | Yes |
    | Dynamic personalisation | No | Yes | No | Yes | Partial (only on higher account tiers) | Partial (only on the highest account tiers) |
    | Product recommendations | No | Yes | No | Partial (requires another subscription) | Partial (requires another subscription) | Yes |
    | Support | Self-help desk on the free tier | Email, live chat, phone support | Email, self-help guides and user forum | Knowledge base, online tickets, user community | Self-help guides, email, phone | Knowledge base, email, and live chat support |
    | Price | Freemium | On-demand | From €19/month for the Cloud subscription; from €199/year as a plugin for On-Premise | On-demand | Freemium; from $365/mo | From $300/month |

    Conclusion 

    Google Optimize has served marketers well for over five years. But as Google has decided to move on, so should you. 

    Other A/B testing tools like Matomo, Optimizely or VWO offer better funnel analytics and split testing capabilities without any usage caps. Also, tools like Adobe Target, Optimizely and VWO offer advanced content personalisation based on aggregate analytics. However, they also come with much higher subscription costs.

    Matomo is a robust, compliant and cost-effective alternative to Google Optimize. Our tool allows you to schedule campaigns across all digital mediums (and even desktop apps!) without a (...)

  • Audio & video not synchronized properly if I merge multiple videos in mp4parser

    1 October 2013, by maniya

    I have used mp4parser to merge video clips captured with dynamic pause/record, with a maximum recording length of 6 seconds. In the preview it works fine when the video is recorded with only a few pause/record cycles, but if I try more than 3 pause/record cycles, the last video file does not get merged properly with the audio. At the start of the video the sync is OK, but towards the end the video hangs while the audio keeps playing on screen for the remaining file duration, about 1 second. (A duration-alignment sketch is included after the reference link below.)

    My Recording manager

    public class RecordingManager implements Camera.ErrorCallback, MediaRecorder.OnErrorListener, MediaRecorder.OnInfoListener {

       private static final String TAG = RecordingManager.class.getSimpleName();
       private static final int FOCUS_AREA_RADIUS = 32;
       private static final int FOCUS_MAX_VALUE = 1000;
       private static final int FOCUS_MIN_VALUE = -1000;
       private static final long MINIMUM_RECORDING_TIME = 2000;
       private static final int MAXIMUM_RECORDING_TIME = 70 * 1000;
       private static final long LOW_STORAGE_THRESHOLD = 5 * 1024 * 1024;
       private static final long RECORDING_FILE_LIMIT = 100 * 1024 * 1024;

       private boolean paused = true;

       private MediaRecorder mediaRecorder = null;
       private boolean recording = false;

       private FrameLayout previewFrame = null;

       private boolean mPreviewing = false;

    //    private TextureView mTextureView = null;
    //    private SurfaceTexture mSurfaceTexture = null;
    //    private boolean mSurfaceTextureReady = false;
    //
       private SurfaceView surfaceView = null;
       private SurfaceHolder surfaceHolder = null;
       private boolean surfaceViewReady = false;

       private Camera camera = null;
       private Camera.Parameters cameraParameters = null;
       private CamcorderProfile camcorderProfile = null;

       private int mOrientation = -1;
       private OrientationEventListener mOrientationEventListener = null;

       private long mStartRecordingTime;
       private int mVideoWidth;
       private int mVideoHeight;
       private long mStorageSpace;

       private Handler mHandler = new Handler();
    //    private Runnable mUpdateRecordingTimeTask = new Runnable() {
    //        @Override
    //        public void run() {
    //            long recordingTime = System.currentTimeMillis() - mStartRecordingTime;
    //            Log.d(TAG, String.format("Recording time:%d", recordingTime));
    //            mHandler.postDelayed(this, CLIP_GRAPH_UPDATE_INTERVAL);
    //        }
    //    };
       private Runnable mStopRecordingTask = new Runnable() {
           @Override
           public void run() {
               stopRecording();
           }
       };

       private static RecordingManager mInstance = null;
       private Activity currentActivity = null;
       private String destinationFilepath = "";
       private String snapshotFilepath = "";

       public static RecordingManager getInstance(Activity activity, FrameLayout previewFrame) {
           if (mInstance == null || mInstance.currentActivity != activity) {
               mInstance = new RecordingManager(activity, previewFrame);
           }
           return mInstance;
       }

       private RecordingManager(Activity activity, FrameLayout previewFrame) {
           currentActivity = activity;
           this.previewFrame = previewFrame;
       }

       public int getVideoWidth() {
           return this.mVideoWidth;
       }
       public int getVideoHeight() {
           return this.mVideoHeight;
       }
       public void setDestinationFilepath(String filepath) {
           this.destinationFilepath = filepath;
       }
       public String getDestinationFilepath() {
           return this.destinationFilepath;
       }
       public void setSnapshotFilepath(String filepath) {
           this.snapshotFilepath = filepath;
       }
       public String getSnapshotFilepath() {
           return this.snapshotFilepath;
       }
       public void init(String videoPath, String snapshotPath) {
           Log.v(TAG, "init.");
           setDestinationFilepath(videoPath);
           setSnapshotFilepath(snapshotPath);
           if (!Utils.isExternalStorageAvailable()) {
               showStorageErrorAndFinish();
               return;
           }

           openCamera();
           if (camera == null) {
               showCameraErrorAndFinish();
               return;
           }
    }

       public void onResume() {
           Log.v(TAG, "onResume.");
           paused = false;

           // Open the camera
           if (camera == null) {
               openCamera();
               if (camera == null) {
                   showCameraErrorAndFinish();
                   return;
               }
           }

           // Initialize the surface texture or surface view
    //        if (useTexture() && mTextureView == null) {
    //            initTextureView();
    //            mTextureView.setVisibility(View.VISIBLE);
    //        } else if (!useTexture() && mSurfaceView == null) {
               initSurfaceView();
               surfaceView.setVisibility(View.VISIBLE);
    //        }

           // Start the preview
           if (!mPreviewing) {
               startPreview();
           }
       }

       private void openCamera() {
           Log.v(TAG, "openCamera");
           try {
               camera = Camera.open();
               camera.setErrorCallback(this);
               camera.setDisplayOrientation(90); // Since we only support portrait mode
               cameraParameters = camera.getParameters();
           } catch (RuntimeException e) {
               e.printStackTrace();
               camera = null;
           }
       }

       private void closeCamera() {
           Log.v(TAG, "closeCamera");
           if (camera == null) {
               Log.d(TAG, "Already stopped.");
               return;
           }

           camera.setErrorCallback(null);
           if (mPreviewing) {
               stopPreview();
           }
           camera.release();
           camera = null;
       }




       private void initSurfaceView() {
           surfaceView = new SurfaceView(currentActivity);
           surfaceView.getHolder().addCallback(new SurfaceViewCallback());
           surfaceView.setVisibility(View.GONE);
           FrameLayout.LayoutParams params = new LayoutParams(
                   LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT, Gravity.CENTER);
           surfaceView.setLayoutParams(params);
           Log.d(TAG, "add surface view to preview frame");
           previewFrame.addView(surfaceView);
       }

       private void releaseSurfaceView() {
           if (surfaceView != null) {
               previewFrame.removeAllViews();
               surfaceView = null;
               surfaceHolder = null;
               surfaceViewReady = false;
           }
       }

       private void startPreview() {
    //        if ((useTexture() && !mSurfaceTextureReady) || (!useTexture() && !mSurfaceViewReady)) {
    //            return;
    //        }

           Log.v(TAG, "startPreview.");
           if (mPreviewing) {
               stopPreview();
           }

           setCameraParameters();
           resizePreview();

           try {
    //            if (useTexture()) {
    //                mCamera.setPreviewTexture(mSurfaceTexture);
    //            } else {
                   camera.setPreviewDisplay(surfaceHolder);
    //            }
               camera.startPreview();
               mPreviewing = true;
           } catch (Exception e) {
               closeCamera();
               e.printStackTrace();
               Log.e(TAG, "startPreview failed.");
           }

       }

       private void stopPreview() {
           Log.v(TAG, "stopPreview");
           if (camera != null) {
               camera.stopPreview();
               mPreviewing = false;
           }
       }

       public void onPause() {
           paused = true;

           if (recording) {
               stopRecording();
           }
           closeCamera();

    //        if (useTexture()) {
    //            releaseSurfaceTexture();
    //        } else {
               releaseSurfaceView();
    //        }
       }

       private void setCameraParameters() {
           if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)) {
               camcorderProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_720P);
           } else if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_480P)) {
               camcorderProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_480P);
           } else {
               camcorderProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
           }
           mVideoWidth = camcorderProfile.videoFrameWidth;
           mVideoHeight = camcorderProfile.videoFrameHeight;
           camcorderProfile.fileFormat = MediaRecorder.OutputFormat.MPEG_4;
           camcorderProfile.videoFrameRate = 30;

           Log.v(TAG, "mVideoWidth=" + mVideoWidth + " mVideoHeight=" + mVideoHeight);
           cameraParameters.setPreviewSize(mVideoWidth, mVideoHeight);

           if (cameraParameters.getSupportedWhiteBalance().contains(Camera.Parameters.WHITE_BALANCE_AUTO)) {
               cameraParameters.setWhiteBalance(Camera.Parameters.WHITE_BALANCE_AUTO);
           }

           if (cameraParameters.getSupportedFocusModes().contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
               cameraParameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
           }

           cameraParameters.setRecordingHint(true);
           cameraParameters.set("cam_mode", 1);

           camera.setParameters(cameraParameters);
           cameraParameters = camera.getParameters();

           camera.setDisplayOrientation(90);
           android.hardware.Camera.CameraInfo info = new android.hardware.Camera.CameraInfo();
           Log.d(TAG, info.orientation + " degree");
       }

       private void resizePreview() {
           Log.d(TAG, String.format("Video size:%d|%d", mVideoWidth, mVideoHeight));

           Point optimizedSize = getOptimizedPreviewSize(mVideoWidth, mVideoHeight);
           Log.d(TAG, String.format("Optimized size:%d|%d", optimizedSize.x, optimizedSize.y));

           ViewGroup.LayoutParams params = (ViewGroup.LayoutParams) previewFrame.getLayoutParams();
           params.width = optimizedSize.x;
           params.height = optimizedSize.y;
           previewFrame.setLayoutParams(params);
       }

       public void setOrientation(int ori) {
           this.mOrientation = ori;
       }

       public void setOrientationEventListener(OrientationEventListener listener) {
           this.mOrientationEventListener = listener;
       }

       public Camera getCamera() {
           return camera;
       }

       @SuppressWarnings("serial")
       public void setFocusArea(float x, float y) {
           if (camera != null) {
               int viewWidth = surfaceView.getWidth();
               int viewHeight = surfaceView.getHeight();

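            // Map the touch point from view pixels into the camera focus coordinate
            // space (-1000..1000), swapping the axes because the preview is rotated 90°.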
               int focusCenterX = FOCUS_MAX_VALUE - (int) (x / viewWidth * (FOCUS_MAX_VALUE - FOCUS_MIN_VALUE));
               int focusCenterY = FOCUS_MIN_VALUE + (int) (y / viewHeight * (FOCUS_MAX_VALUE - FOCUS_MIN_VALUE));
               final int left = focusCenterY - FOCUS_AREA_RADIUS < FOCUS_MIN_VALUE ? FOCUS_MIN_VALUE : focusCenterY - FOCUS_AREA_RADIUS;
               final int top = focusCenterX - FOCUS_AREA_RADIUS < FOCUS_MIN_VALUE ? FOCUS_MIN_VALUE : focusCenterX - FOCUS_AREA_RADIUS;
               final int right = focusCenterY + FOCUS_AREA_RADIUS > FOCUS_MAX_VALUE ? FOCUS_MAX_VALUE : focusCenterY + FOCUS_AREA_RADIUS;
               final int bottom = focusCenterX + FOCUS_AREA_RADIUS > FOCUS_MAX_VALUE ? FOCUS_MAX_VALUE : focusCenterX + FOCUS_AREA_RADIUS;

               Camera.Parameters params = camera.getParameters();
            params.setFocusAreas(new ArrayList<Camera.Area>() {
                   {
                       add(new Camera.Area(new Rect(left, top, right, bottom), 1000));
                   }
               });
               camera.setParameters(params);
               camera.autoFocus(new AutoFocusCallback() {
                   @Override
                   public void onAutoFocus(boolean success, Camera camera) {
                       Log.d(TAG, "onAutoFocus");
                   }
               });
           }
       }

       public void startRecording(String destinationFilepath) {
           if (!recording) {
               updateStorageSpace();
               setDestinationFilepath(destinationFilepath);
               if (mStorageSpace <= LOW_STORAGE_THRESHOLD) {
                   Log.v(TAG, "Storage issue, ignore the start request");
                   Toast.makeText(currentActivity, "Storage issue, ignore the recording request", Toast.LENGTH_LONG).show();
                   return;
               }

               if (!prepareMediaRecorder()) {
                   Toast.makeText(currentActivity, "prepareMediaRecorder failed.", Toast.LENGTH_LONG).show();
                   return;
               }

               Log.d(TAG, "Successfully prepare media recorder.");
               try {
                   mediaRecorder.start();
               } catch (RuntimeException e) {
                   Log.e(TAG, "MediaRecorder start failed.");
                   releaseMediaRecorder();
                   return;
               }

               mStartRecordingTime = System.currentTimeMillis();

               if (mOrientationEventListener != null) {
                   mOrientationEventListener.disable();
               }

               recording = true;
           }
       }

       public void stopRecording() {
           if (recording) {
               if (!paused) {
                   // Capture at least 1 second video
                   long currentTime = System.currentTimeMillis();
                   if (currentTime - mStartRecordingTime < MINIMUM_RECORDING_TIME) {
                       mHandler.postDelayed(mStopRecordingTask, MINIMUM_RECORDING_TIME - (currentTime - mStartRecordingTime));
                       return;
                   }
               }

               if (mOrientationEventListener != null) {
                   mOrientationEventListener.enable();
               }

    //            mHandler.removeCallbacks(mUpdateRecordingTimeTask);

               try {
                   mediaRecorder.setOnErrorListener(null);
                   mediaRecorder.setOnInfoListener(null);
                   mediaRecorder.stop(); // stop the recording
                   Toast.makeText(currentActivity, "Video file saved.", Toast.LENGTH_LONG).show();

                   long stopRecordingTime = System.currentTimeMillis();
                   Log.d(TAG, String.format("stopRecording. file:%s duration:%d", destinationFilepath, stopRecordingTime - mStartRecordingTime));

                   // Calculate the duration of video
                   MediaMetadataRetriever mmr = new MediaMetadataRetriever();
                   mmr.setDataSource(this.destinationFilepath);
                   String _length = mmr.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION);
                   if (_length != null) {
                       Log.d(TAG, String.format("clip duration:%d", Long.parseLong(_length)));
                   }

                   // Taking the snapshot of video
                   Bitmap snapshot = ThumbnailUtils.createVideoThumbnail(this.destinationFilepath, Thumbnails.MICRO_KIND);
                   try {
                       FileOutputStream out = new FileOutputStream(this.snapshotFilepath);
                       snapshot.compress(Bitmap.CompressFormat.JPEG, 70, out);
                       out.close();
                   } catch (Exception e) {
                       e.printStackTrace();
                   }

    //                mActivity.showPlayButton();

               } catch (RuntimeException e) {
                   e.printStackTrace();
                   Log.e(TAG, e.getMessage());
                   // if no valid audio/video data has been received when stop() is
                   // called
               } finally {
    //          

                   releaseMediaRecorder(); // release the MediaRecorder object
                   if (!paused) {
                       cameraParameters = camera.getParameters();
                   }
                   recording = false;
               }

           }
       }

       public void setRecorderOrientation(int orientation) {
           // For back camera only
           if (orientation != -1) {
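            // Round the device orientation to the nearest multiple of 90° and add the
            // 90° offset needed for the back camera before using it as the hint.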
               Log.d(TAG, "set orientationHint:" + (orientation + 135) % 360 / 90 * 90);
               mediaRecorder.setOrientationHint((orientation + 135) % 360 / 90 * 90);
           }else {
               Log.d(TAG, "not set orientationHint to mediaRecorder");
           }
       }

       private boolean prepareMediaRecorder() {
           mediaRecorder = new MediaRecorder();

           camera.unlock();
           mediaRecorder.setCamera(camera);

           mediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
           mediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

           mediaRecorder.setProfile(camcorderProfile);

           mediaRecorder.setMaxDuration(MAXIMUM_RECORDING_TIME);
           mediaRecorder.setOutputFile(this.destinationFilepath);

           try {
               mediaRecorder.setMaxFileSize(Math.min(RECORDING_FILE_LIMIT, mStorageSpace - LOW_STORAGE_THRESHOLD));
           } catch (RuntimeException exception) {
           }

           setRecorderOrientation(mOrientation);

           if (!useTexture()) {
               mediaRecorder.setPreviewDisplay(surfaceHolder.getSurface());
           }

           try {
               mediaRecorder.prepare();
           } catch (IllegalStateException e) {
               releaseMediaRecorder();
               return false;
           } catch (IOException e) {
               releaseMediaRecorder();
               return false;
           }

           mediaRecorder.setOnErrorListener(this);
           mediaRecorder.setOnInfoListener(this);

           return true;

       }

       private void releaseMediaRecorder() {
           if (mediaRecorder != null) {
               mediaRecorder.reset(); // clear recorder configuration
               mediaRecorder.release(); // release the recorder object
               mediaRecorder = null;
               camera.lock(); // lock camera for later use
           }
       }

       private Point getOptimizedPreviewSize(int videoWidth, int videoHeight) {
           Display display = currentActivity.getWindowManager().getDefaultDisplay();
           Point size = new Point();
           display.getSize(size);

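        // Fill the full screen width; because the preview is rotated 90° for portrait,
        // the video's width/height ratio gives the scaled on-screen height.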
           Point optimizedSize = new Point();
           optimizedSize.x = size.x;
           optimizedSize.y = (int) ((float) videoWidth / (float) videoHeight * size.x);

           return optimizedSize;
       }

       private void showCameraErrorAndFinish() {
           DialogInterface.OnClickListener buttonListener = new DialogInterface.OnClickListener() {
               @Override
               public void onClick(DialogInterface dialog, int which) {
                   currentActivity.finish();
               }
           };
           new AlertDialog.Builder(currentActivity).setCancelable(false)
                   .setTitle("Camera error")
                   .setMessage("Cannot connect to the camera.")
                   .setNeutralButton("OK", buttonListener)
                   .show();
       }

       private void showStorageErrorAndFinish() {
           DialogInterface.OnClickListener buttonListener = new DialogInterface.OnClickListener() {
               @Override
               public void onClick(DialogInterface dialog, int which) {
                   currentActivity.finish();
               }
           };
           new AlertDialog.Builder(currentActivity).setCancelable(false)
                   .setTitle("Storage error")
                   .setMessage("Cannot read external storage.")
                   .setNeutralButton("OK", buttonListener)
                   .show();
       }

       private void updateStorageSpace() {
           mStorageSpace = getAvailableSpace();
           Log.v(TAG, "updateStorageSpace mStorageSpace=" + mStorageSpace);
       }

       private long getAvailableSpace() {
           String state = Environment.getExternalStorageState();
           Log.d(TAG, "External storage state=" + state);
           if (Environment.MEDIA_CHECKING.equals(state)) {
               return -1;
           }
           if (!Environment.MEDIA_MOUNTED.equals(state)) {
               return -1;
           }

           File directory = currentActivity.getExternalFilesDir("vine");
           directory.mkdirs();
           if (!directory.isDirectory() || !directory.canWrite()) {
               return -1;
           }

           try {
               StatFs stat = new StatFs(directory.getAbsolutePath());
               return stat.getAvailableBlocks() * (long) stat.getBlockSize();
           } catch (Exception e) {
               Log.i(TAG, "Fail to access external storage", e);
           }
           return -1;
       }

       private boolean useTexture() {
           return false;
    //        return Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1;
       }

       private class SurfaceViewCallback implements SurfaceHolder.Callback {

           @Override
           public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
               Log.v(TAG, "surfaceChanged. width=" + width + ". height=" + height);
           }

           @Override
           public void surfaceCreated(SurfaceHolder holder) {
               Log.v(TAG, "surfaceCreated");
               surfaceViewReady = true;
               surfaceHolder = holder;
               startPreview();
           }

           @Override
           public void surfaceDestroyed(SurfaceHolder holder) {
               Log.d(TAG, "surfaceDestroyed");
               surfaceViewReady = false;
           }

       }

       @Override
       public void onError(int error, Camera camera) {
           Log.e(TAG, "Camera onError. what=" + error + ".");
           if (error == Camera.CAMERA_ERROR_SERVER_DIED) {

           } else if (error == Camera.CAMERA_ERROR_UNKNOWN) {

           }
       }

       @Override
       public void onInfo(MediaRecorder mr, int what, int extra) {
           if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
               stopRecording();
           } else if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_FILESIZE_REACHED) {
               stopRecording();
               Toast.makeText(currentActivity, "Size limit reached", Toast.LENGTH_LONG).show();
           }
       }

       @Override
       public void onError(MediaRecorder mr, int what, int extra) {
           Log.e(TAG, "MediaRecorder onError. what=" + what + ". extra=" + extra);
           if (what == MediaRecorder.MEDIA_RECORDER_ERROR_UNKNOWN) {
               stopRecording();
           }
       }

    }

    VideoUtils

    public class VideoUtils {
       private static final String TAG = VideoUtils.class.getSimpleName();

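       // 3x3 transformation matrix written into the merged video track's header (tkhd)
       // so the output keeps the orientation the clips were recorded with.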
       static double[] matrix = new double[] { 0.0, 1.0, 0.0, -1.0, 0.0, 0.0, 0.0,
               0.0, 1.0 };

       public static boolean MergeFiles(String speratedDirPath,
               String targetFileName) {
           File videoSourceDirFile = new File(speratedDirPath);
           String[] videoList = videoSourceDirFile.list();
        List<Track> videoTracks = new LinkedList<Track>();
        List<Track> audioTracks = new LinkedList<Track>();
           for (String file : videoList) {
               Log.d(TAG, "source files" + speratedDirPath
                       + File.separator + file);
               try {
                   FileChannel fc = new FileInputStream(speratedDirPath
                           + File.separator + file).getChannel();
                   Movie movie = MovieCreator.build(fc);
                   for (Track t : movie.getTracks()) {
                       if (t.getHandler().equals("soun")) {
                           audioTracks.add(t);
                       }
                       if (t.getHandler().equals("vide")) {

                           videoTracks.add(t);
                       }
                   }
               } catch (FileNotFoundException e) {
                   e.printStackTrace();
                   return false;
               } catch (IOException e) {
                   e.printStackTrace();
                   return false;
               }
           }

           Movie result = new Movie();

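           // AppendTrack concatenates the samples of the per-clip tracks end to end.
           // Audio and video are appended independently, so any per-clip difference in
           // track length accumulates as audio/video drift in the merged file.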
           try {
               if (audioTracks.size() > 0) {
                   result.addTrack(new AppendTrack(audioTracks
                           .toArray(new Track[audioTracks.size()])));
               }
               if (videoTracks.size() > 0) {
                   result.addTrack(new AppendTrack(videoTracks
                           .toArray(new Track[videoTracks.size()])));
               }
               IsoFile out = new DefaultMp4Builder().build(result);



               FileChannel fc = new RandomAccessFile(
                       String.format(targetFileName), "rw").getChannel();

               Log.d(TAG, "target file:" + targetFileName);
               TrackBox tb = out.getMovieBox().getBoxes(TrackBox.class).get(1);

               TrackHeaderBox tkhd = tb.getTrackHeaderBox();
               double[] b = tb.getTrackHeaderBox().getMatrix();

               tkhd.setMatrix(matrix);

               fc.position(0);
               out.getBox(fc);
               fc.close();
               for (String file : videoList) {
                   File TBRFile = new File(speratedDirPath + File.separator + file);
                   TBRFile.delete();
               }
               boolean a = videoSourceDirFile.delete();
               Log.d(TAG, "try to delete dir:" + a);
           } catch (IOException e) {
               // TODO Auto-generated catch block
               e.printStackTrace();
               return false;
           }

           return true;
       }

       public static boolean clearFiles(String speratedDirPath) {
           File videoSourceDirFile = new File(speratedDirPath);
        if (videoSourceDirFile != null
                && videoSourceDirFile.listFiles() != null) {
               File[] videoList = videoSourceDirFile.listFiles();
               for (File video : videoList) {
                   video.delete();
               }
               videoSourceDirFile.delete();
           }
           return true;
       }

       public static int createSnapshot(String videoFile, int kind, String snapshotFilepath) {
           return 0;
       };

       public static int createSnapshot(String videoFile, int width, int height, String snapshotFilepath) {
           return 0;
       }
    }

    my reference code project link is

    https://github.com/jwfing/AndroidVideoKit
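
    One direction sometimes suggested for this kind of drift is to trim each clip's longer track to its shorter one before appending, since AppendTrack joins the audio tracks and the video tracks independently and any per-clip length mismatch accumulates. The sketch below only illustrates that idea; it assumes an mp4parser/isoparser 1.x Track API that exposes getSampleDurations() and the CroppedTrack(track, fromSample, toSample) helper (older builds expose the same timing data via getDecodingTimeEntries()), so treat it as a starting point rather than a drop-in fix.

    import com.googlecode.mp4parser.authoring.Track;
    import com.googlecode.mp4parser.authoring.tracks.CroppedTrack;

    public class TrackAligner {

        // Total duration of a track in seconds, computed from its per-sample durations.
        static double durationSeconds(Track track) {
            long timescale = track.getTrackMetaData().getTimescale();
            double total = 0;
            for (long delta : track.getSampleDurations()) {
                total += (double) delta / timescale;
            }
            return total;
        }

        // Returns a view of the track cropped (from the end) so it lasts at most maxSeconds.
        static Track cropToDuration(Track track, double maxSeconds) {
            long timescale = track.getTrackMetaData().getTimescale();
            long[] deltas = track.getSampleDurations();
            double elapsed = 0;
            int lastSample = 0;
            for (int i = 0; i < deltas.length; i++) {
                double next = elapsed + (double) deltas[i] / timescale;
                if (next > maxSeconds) {
                    break;
                }
                elapsed = next;
                lastSample = i + 1;
            }
            return lastSample == deltas.length ? track : new CroppedTrack(track, 0, lastSample);
        }

        // Trim the longer of one clip's audio/video tracks so both cover the same time span.
        static Track[] alignClip(Track audio, Track video) {
            double common = Math.min(durationSeconds(audio), durationSeconds(video));
            return new Track[] { cropToDuration(audio, common), cropToDuration(video, common) };
        }
    }

    In MergeFiles, the aligned pair for each source file would then be added to audioTracks and videoTracks instead of the raw tracks; whether this removes the hang at the tail depends on how far apart the last clip's audio and video tracks actually are.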

  • Audio does not stop recording after pause (ffmpeg, C++)

    15 September 2021, by C1ngh10

    I am developing an application that records the screen and the audio from the microphone. I implemented the pause function by stopping the video and audio threads on a condition variable, and resuming them with a notify on the same condition variable. This is done in captureAudio(), in the main while loop. This works on macOS and Linux, where I use avfoundation and alsa respectively, but on Windows, with dshow, it keeps recording audio during the pause, while the thread is waiting on the condition variable. Does anybody know how I can fix this behaviour?
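
    One workaround sometimes suggested for dshow is to keep the capture loop reading while paused and simply discard the packets, instead of blocking the reading thread on the condition variable, so the device never accumulates audio recorded during the pause. The sketch below only illustrates that shape: the pause/stop flags and the processPacket callback are placeholders standing in for the corresponding parts of captureAudio(), and the caller still has to offset timestamps across the gap when resuming. The original ScreenRecorder code from the question follows after the sketch.

    #include <functional>
    #include <mutex>

    extern "C" {
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    }

    // Keeps draining the audio device even while paused, dropping the packets,
    // so dshow cannot queue up audio that would later be written to the output.
    void drainingAudioLoop(AVFormatContext* inAudioFormatContext,
                           std::mutex& mtx,
                           const bool& pauseCapture,
                           const bool& stopCapture,
                           const std::function<void(AVPacket*)>& processPacket) {
        AVPacket* pkt = av_packet_alloc();
        while (true) {
            {
                std::lock_guard<std::mutex> lock(mtx);
                if (stopCapture) break;
            }

            // Always read, even while paused, so the device buffer keeps being emptied.
            if (av_read_frame(inAudioFormatContext, pkt) < 0) {
                continue;   // nothing ready yet, or a transient read error
            }

            bool paused;
            {
                std::lock_guard<std::mutex> lock(mtx);
                paused = pauseCapture;
            }

            if (paused) {
                av_packet_unref(pkt);   // drop audio captured during the pause
                continue;
            }

            processPacket(pkt);         // normal path: decode, resample, encode, write
            av_packet_unref(pkt);
        }
        av_packet_free(&pkt);
    }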

    #include "ScreenRecorder.h"

    using namespace std;

    ScreenRecorder::ScreenRecorder() : pauseCapture(false), stopCapture(false), started(false), activeMenu(true) {
        avcodec_register_all();
        avdevice_register_all();

        width = 1920;
        height = 1200;
    }

    ScreenRecorder::~ScreenRecorder() {

        if (started) {
            value = av_write_trailer(outAVFormatContext);
            if (value < 0) {
                cerr << "Error in writing av trailer" << endl;
                exit(-1);
            }

            avformat_close_input(&inAudioFormatContext);
            if (inAudioFormatContext == nullptr) {
                cout << "inAudioFormatContext close successfully" << endl;
            }
            else {
                cerr << "Error: unable to close the inAudioFormatContext" << endl;
                exit(-1);
                //throw "Error: unable to close the file";
            }
            avformat_free_context(inAudioFormatContext);
            if (inAudioFormatContext == nullptr) {
                cout << "AudioFormat freed successfully" << endl;
            }
            else {
                cerr << "Error: unable to free AudioFormatContext" << endl;
                exit(-1);
            }

            avformat_close_input(&pAVFormatContext);
            if (pAVFormatContext == nullptr) {
                cout << "File close successfully" << endl;
            }
            else {
                cerr << "Error: unable to close the file" << endl;
                exit(-1);
                //throw "Error: unable to close the file";
            }

            avformat_free_context(pAVFormatContext);
            if (pAVFormatContext == nullptr) {
                cout << "VideoFormat freed successfully" << endl;
            }
            else {
                cerr << "Error: unable to free VideoFormatContext" << endl;
                exit(-1);
            }
        }
    }

    /*==================================== VIDEO ==============================*/

    int ScreenRecorder::openVideoDevice() throw() {
        value = 0;
        options = nullptr;
        pAVFormatContext = nullptr;

        pAVFormatContext = avformat_alloc_context();

        string dimension = to_string(width) + "x" + to_string(height);
        av_dict_set(&options, "video_size", dimension.c_str(), 0);   //option to set the dimension of the screen section to record

    #ifdef _WIN32
        pAVInputFormat = av_find_input_format("gdigrab");
        if (avformat_open_input(&pAVFormatContext, "desktop", pAVInputFormat, &options) != 0) {
            cerr << "Couldn't open input stream" << endl;
            exit(-1);
        }

    #elif defined linux

        int offset_x = 0, offset_y = 0;
        string url = ":0.0+" + to_string(offset_x) + "," + to_string(offset_y);  //custom string to set the start point of the screen section
        pAVInputFormat = av_find_input_format("x11grab");
        value = avformat_open_input(&pAVFormatContext, url.c_str(), pAVInputFormat, &options);

        if (value != 0) {
            cerr << "Error in opening input device (video)" << endl;
            exit(-1);
        }
    #else

        value = av_dict_set(&options, "pixel_format", "0rgb", 0);
        if (value < 0) {
            cerr << "Error in setting pixel format" << endl;
            exit(-1);
        }

        value = av_dict_set(&options, "video_device_index", "1", 0);

        if (value < 0) {
            cerr << "Error in setting video device index" << endl;
            exit(-1);
        }

        pAVInputFormat = av_find_input_format("avfoundation");

        if (avformat_open_input(&pAVFormatContext, "Capture screen 0:none", pAVInputFormat, &options) != 0) {  //TODO find a way to always select the screen (perhaps "Capture screen 0")
            cerr << "Error in opening input device" << endl;
            exit(-1);
        }

    #endif
        //set frames per second

        value = av_dict_set(&options, "framerate", "30", 0);
        if (value < 0) {
            cerr << "Error in setting dictionary value (setting framerate)" << endl;
            exit(-1);
        }

        value = av_dict_set(&options, "preset", "medium", 0);
        if (value < 0) {
            cerr << "Error in setting dictionary value (setting preset value)" << endl;
            exit(-1);
        }
        /*
        value = av_dict_set(&options, "vsync", "1", 0);
        if (value < 0) {
            cerr << "Error in setting dictionary value (setting vsync value)" << endl;
            exit(-1);
        }
        */

        value = av_dict_set(&options, "probesize", "60M", 0);
        if (value < 0) {
            cerr << "Error in setting probesize value" << endl;
            exit(-1);
        }

        //get video stream infos from context
        value = avformat_find_stream_info(pAVFormatContext, nullptr);
        if (value < 0) {
            cerr << "Error in retrieving the stream info" << endl;
            exit(-1);
        }

        VideoStreamIndx = -1;
        for (int i = 0; i < pAVFormatContext->nb_streams; i++) {
            if (pAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
                VideoStreamIndx = i;
                break;
            }
        }
        if (VideoStreamIndx == -1) {
            cerr << "Error: unable to find video stream index" << endl;
            exit(-2);
        }

        pAVCodecContext = pAVFormatContext->streams[VideoStreamIndx]->codec;
        pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id/*params->codec_id*/);
        if (pAVCodec == nullptr) {
            cerr << "Error: unable to find decoder video" << endl;
            exit(-1);
        }

        /* cout << "Insert height and width [h w]: ";   //custom screen dimension to record
        cin >> h >> w; */

        return 0;
    }

    /*==========================================  AUDIO  ============================*/

    int ScreenRecorder::openAudioDevice() {
        audioOptions = nullptr;
        inAudioFormatContext = nullptr;

        inAudioFormatContext = avformat_alloc_context();
        value = av_dict_set(&audioOptions, "sample_rate", "44100", 0);
        if (value < 0) {
            cerr << "Error: cannot set audio sample rate" << endl;
            exit(-1);
        }
        value = av_dict_set(&audioOptions, "async", "1", 0);
        if (value < 0) {
            cerr << "Error: cannot set audio sample rate" << endl;
            exit(-1);
        }

    #if defined linux
        audioInputFormat = av_find_input_format("alsa");
        value = avformat_open_input(&inAudioFormatContext, "hw:0", audioInputFormat, &audioOptions);
        if (value != 0) {
            cerr << "Error in opening input device (audio)" << endl;
            exit(-1);
        }
    #endif

    #if defined _WIN32
        audioInputFormat = av_find_input_format("dshow");
        value = avformat_open_input(&inAudioFormatContext, "audio=Microfono (Realtek(R) Audio)", audioInputFormat, &audioOptions);
        if (value != 0) {
            cerr << "Error in opening input device (audio)" << endl;
            exit(-1);
        }
    #endif

        value = avformat_find_stream_info(inAudioFormatContext, nullptr);
        if (value != 0) {
            cerr << "Error: cannot find the audio stream information" << endl;
            exit(-1);
        }

        audioStreamIndx = -1;
        for (int i = 0; i < inAudioFormatContext->nb_streams; i++) {
            if (inAudioFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
                audioStreamIndx = i;
                break;
            }
        }
        if (audioStreamIndx == -1) {
            cerr << "Error: unable to find audio stream index" << endl;
            exit(-2);
        }
    }

    int ScreenRecorder::initOutputFile() {
        value = 0;

        outAVFormatContext = nullptr;
        outputAVFormat = av_guess_format(nullptr, "output.mp4", nullptr);
        if (outputAVFormat == nullptr) {
            cerr << "Error in guessing the video format, try with correct format" << endl;
            exit(-5);
        }
        avformat_alloc_output_context2(&outAVFormatContext, outputAVFormat, outputAVFormat->name, "..\\media\\output.mp4");
        if (outAVFormatContext == nullptr) {
            cerr << "Error in allocating outAVFormatContext" << endl;
            exit(-4);
        }

        /*===========================================================================*/
        this->generateVideoStream();
        this->generateAudioStream();

        //create an empty video file
        if (!(outAVFormatContext->flags & AVFMT_NOFILE)) {
            if (avio_open2(&outAVFormatContext->pb, "..\\media\\output.mp4", AVIO_FLAG_WRITE, nullptr, nullptr) < 0) {
                cerr << "Error in creating the video file" << endl;
                exit(-10);
            }
        }

        if (outAVFormatContext->nb_streams == 0) {
            cerr << "Output file does not contain any stream" << endl;
            exit(-11);
        }
        value = avformat_write_header(outAVFormatContext, &options);
        if (value < 0) {
            cerr << "Error in writing the header context" << endl;
            exit(-12);
        }
        return 0;
    }

    /*===================================  VIDEO  ==================================*/

    void ScreenRecorder::generateVideoStream() {
        //Generate video stream
        videoSt = avformat_new_stream(outAVFormatContext, nullptr);
        if (videoSt == nullptr) {
            cerr << "Error in creating AVFormatStream" << endl;
            exit(-6);
        }

        outVideoCodec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);  //AV_CODEC_ID_MPEG4
        if (outVideoCodec == nullptr) {
            cerr << "Error in finding the AVCodec, try again with the correct codec" << endl;
            exit(-8);
        }

        outVideoCodecContext = avcodec_alloc_context3(outVideoCodec);
        if (outVideoCodecContext == nullptr) {
            cerr << "Error in allocating the codec context" << endl;
            exit(-7);
        }

        //set properties of the video file (stream)
        outVideoCodecContext = videoSt->codec;
        outVideoCodecContext->codec_id = AV_CODEC_ID_MPEG4;
        outVideoCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
        outVideoCodecContext->pix_fmt = AV_PIX_FMT_YUV420P;
        outVideoCodecContext->bit_rate = 10000000;
        outVideoCodecContext->width = width;
        outVideoCodecContext->height = height;
        outVideoCodecContext->gop_size = 10;
        outVideoCodecContext->global_quality = 500;
        outVideoCodecContext->max_b_frames = 2;
        outVideoCodecContext->time_base.num = 1;
        outVideoCodecContext->time_base.den = 30;
        outVideoCodecContext->bit_rate_tolerance = 400000;

        if (outVideoCodecContext->codec_id == AV_CODEC_ID_H264) {
            av_opt_set(outVideoCodecContext->priv_data, "preset", "slow", 0);
        }

        if (outAVFormatContext->oformat->flags & AVFMT_GLOBALHEADER) {
            outVideoCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        value = avcodec_open2(outVideoCodecContext, outVideoCodec, nullptr);
        if (value < 0) {
            cerr << "Error in opening the AVCodec" << endl;
            exit(-9);
        }

        outVideoStreamIndex = -1;
        for (int i = 0; i < outAVFormatContext->nb_streams; i++) {
            if (outAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_UNKNOWN) {
                outVideoStreamIndex = i;
            }
        }
        if (outVideoStreamIndex < 0) {
            cerr << "Error: cannot find a free stream index for video output" << endl;
            exit(-1);
        }
        avcodec_parameters_from_context(outAVFormatContext->streams[outVideoStreamIndex]->codecpar, outVideoCodecContext);
    }

    /*===============================  AUDIO  ==================================*/

    void ScreenRecorder::generateAudioStream() {
        AVCodecParameters* params = inAudioFormatContext->streams[audioStreamIndx]->codecpar;
        inAudioCodec = avcodec_find_decoder(params->codec_id);
        if (inAudioCodec == nullptr) {
            cerr << "Error: cannot find the audio decoder" << endl;
            exit(-1);
        }

        inAudioCodecContext = avcodec_alloc_context3(inAudioCodec);
        if (avcodec_parameters_to_context(inAudioCodecContext, params) < 0) {
            cout << "Cannot create codec context for audio input" << endl;
        }

        value = avcodec_open2(inAudioCodecContext, inAudioCodec, nullptr);
        if (value < 0) {
            cerr << "Error: cannot open the input audio codec" << endl;
            exit(-1);
        }

        //Generate audio stream
        outAudioCodecContext = nullptr;
        outAudioCodec = nullptr;
        int i;

        AVStream* audio_st = avformat_new_stream(outAVFormatContext, nullptr);
        if (audio_st == nullptr) {
            cerr << "Error: cannot create audio stream" << endl;
            exit(1);
        }

        outAudioCodec = avcodec_find_encoder(AV_CODEC_ID_AAC);
        if (outAudioCodec == nullptr) {
            cerr << "Error: cannot find requested encoder" << endl;
            exit(1);
        }

        outAudioCodecContext = avcodec_alloc_context3(outAudioCodec);
        if (outAudioCodecContext == nullptr) {
            cerr << "Error: cannot create related VideoCodecContext" << endl;
            exit(1);
        }

        if ((outAudioCodec)->supported_samplerates) {
            outAudioCodecContext->sample_rate = (outAudioCodec)->supported_samplerates[0];
            for (i = 0; (outAudioCodec)->supported_samplerates[i]; i++) {
                if ((outAudioCodec)->supported_samplerates[i] == inAudioCodecContext->sample_rate)
                    outAudioCodecContext->sample_rate = inAudioCodecContext->sample_rate;
            }
        }
        outAudioCodecContext->codec_id = AV_CODEC_ID_AAC;
        outAudioCodecContext->sample_fmt = (outAudioCodec)->sample_fmts ? (outAudioCodec)->sample_fmts[0] : AV_SAMPLE_FMT_FLTP;
        outAudioCodecContext->channels = inAudioCodecContext->channels;
        outAudioCodecContext->channel_layout = av_get_default_channel_layout(outAudioCodecContext->channels);
        outAudioCodecContext->bit_rate = 96000;
        outAudioCodecContext->time_base = { 1, inAudioCodecContext->sample_rate };

        outAudioCodecContext->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;

        if ((outAVFormatContext)->oformat->flags & AVFMT_GLOBALHEADER) {
            outAudioCodecContext->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
        }

        if (avcodec_open2(outAudioCodecContext, outAudioCodec, nullptr) < 0) {
            cerr << "error in opening the avcodec" << endl;
            exit(1);
        }

        //find a free stream index
        outAudioStreamIndex = -1;
        for (i = 0; i < outAVFormatContext->nb_streams; i++) {
            if (outAVFormatContext->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_UNKNOWN) {
                outAudioStreamIndex = i;
            }
        }
        if (outAudioStreamIndex < 0) {
            cerr << "Error: cannot find a free stream for audio on the output" << endl;
            exit(1);
        }

        avcodec_parameters_from_context(outAVFormatContext->streams[outAudioStreamIndex]->codecpar, outAudioCodecContext);
    }

    int ScreenRecorder::init_fifo()
    {
        /* Create the FIFO buffer based on the specified output sample format. */
        if (!(fifo = av_audio_fifo_alloc(outAudioCodecContext->sample_fmt,
            outAudioCodecContext->channels, 1))) {
            fprintf(stderr, "Could not allocate FIFO\n");
            return AVERROR(ENOMEM);
        }
        return 0;
    }

    int ScreenRecorder::add_samples_to_fifo(uint8_t** converted_input_samples, const int frame_size) {
        int error;
        /* Make the FIFO as large as it needs to be to hold both,
         * the old and the new samples. */
        if ((error = av_audio_fifo_realloc(fifo, av_audio_fifo_size(fifo) + frame_size)) < 0) {
            fprintf(stderr, "Could not reallocate FIFO\n");
            return error;
        }
        /* Store the new samples in the FIFO buffer.
*/&#xA;    if (av_audio_fifo_write(fifo, (void**)converted_input_samples, frame_size) &lt; frame_size) {&#xA;        fprintf(stderr, "Could not write data to FIFO\n");&#xA;        return AVERROR_EXIT;&#xA;    }&#xA;    return 0;&#xA;}&#xA;&#xA;int ScreenRecorder::initConvertedSamples(uint8_t*** converted_input_samples,&#xA;    AVCodecContext* output_codec_context,&#xA;    int frame_size) {&#xA;    int error;&#xA;    /* Allocate as many pointers as there are audio channels.&#xA;     * Each pointer will later point to the audio samples of the corresponding&#xA;     * channels (although it may be NULL for interleaved formats).&#xA;     */&#xA;    if (!(*converted_input_samples = (uint8_t**)calloc(output_codec_context->channels,&#xA;        sizeof(**converted_input_samples)))) {&#xA;        fprintf(stderr, "Could not allocate converted input sample pointers\n");&#xA;        return AVERROR(ENOMEM);&#xA;    }&#xA;    /* Allocate memory for the samples of all channels in one consecutive&#xA;     * block for convenience. */&#xA;    if (av_samples_alloc(*converted_input_samples, nullptr,&#xA;        output_codec_context->channels,&#xA;        frame_size,&#xA;        output_codec_context->sample_fmt, 0) &lt; 0) {&#xA;&#xA;        exit(1);&#xA;    }&#xA;    return 0;&#xA;}&#xA;&#xA;static int64_t pts = 0;&#xA;void ScreenRecorder::captureAudio() {&#xA;    int ret;&#xA;    AVPacket* inPacket, * outPacket;&#xA;    AVFrame* rawFrame, * scaledFrame;&#xA;    uint8_t** resampledData;&#xA;&#xA;    init_fifo();&#xA;&#xA;    //allocate space for a packet&#xA;    inPacket = (AVPacket*)av_malloc(sizeof(AVPacket));&#xA;    if (!inPacket) {&#xA;        cerr &lt;&lt; "Cannot allocate an AVPacket for encoded video" &lt;&lt; endl;&#xA;        exit(1);&#xA;    }&#xA;    av_init_packet(inPacket);&#xA;&#xA;    //allocate space for a packet&#xA;    rawFrame = av_frame_alloc();&#xA;    if (!rawFrame) {&#xA;        cerr &lt;&lt; "Cannot allocate an AVPacket for encoded video" &lt;&lt; endl;&#xA;        exit(1);&#xA;    }&#xA;&#xA;    scaledFrame = av_frame_alloc();&#xA;    if (!scaledFrame) {&#xA;        cerr &lt;&lt; "Cannot allocate an AVPacket for encoded video" &lt;&lt; endl;&#xA;        exit(1);&#xA;    }&#xA;&#xA;    outPacket = (AVPacket*)av_malloc(sizeof(AVPacket));&#xA;    if (!outPacket) {&#xA;        cerr &lt;&lt; "Cannot allocate an AVPacket for encoded video" &lt;&lt; endl;&#xA;        exit(1);&#xA;    }&#xA;&#xA;    //init the resampler&#xA;    SwrContext* resampleContext = nullptr;&#xA;    resampleContext = swr_alloc_set_opts(resampleContext,&#xA;        av_get_default_channel_layout(outAudioCodecContext->channels),&#xA;        outAudioCodecContext->sample_fmt,&#xA;        outAudioCodecContext->sample_rate,&#xA;        av_get_default_channel_layout(inAudioCodecContext->channels),&#xA;        inAudioCodecContext->sample_fmt,&#xA;        inAudioCodecContext->sample_rate,&#xA;        0,&#xA;        nullptr);&#xA;    if (!resampleContext) {&#xA;        cerr &lt;&lt; "Cannot allocate the resample context" &lt;&lt; endl;&#xA;        exit(1);&#xA;    }&#xA;    if ((swr_init(resampleContext)) &lt; 0) {&#xA;        fprintf(stderr, "Could not open resample context\n");&#xA;        swr_free(&amp;resampleContext);&#xA;        exit(1);&#xA;    }&#xA;&#xA;    while (true) {&#xA;        if (pauseCapture) {&#xA;            cout &lt;&lt; "Pause audio" &lt;&lt; endl;&#xA;        }&#xA;        cv.wait(ul, [this]() { return !pauseCapture; });&#xA;&#xA;        if (stopCapture) {&#xA;            break;&#xA;        }&#xA;&#xA;     
   ul.unlock();&#xA;&#xA;        if (av_read_frame(inAudioFormatContext, inPacket) >= 0 &amp;&amp; inPacket->stream_index == audioStreamIndx) {&#xA;            //decode audio routing&#xA;            av_packet_rescale_ts(outPacket, inAudioFormatContext->streams[audioStreamIndx]->time_base, inAudioCodecContext->time_base);&#xA;            if ((ret = avcodec_send_packet(inAudioCodecContext, inPacket)) &lt; 0) {&#xA;                cout &lt;&lt; "Cannot decode current audio packet " &lt;&lt; ret &lt;&lt; endl;&#xA;                continue;&#xA;            }&#xA;            &#xA;            while (ret >= 0) {&#xA;                ret = avcodec_receive_frame(inAudioCodecContext, rawFrame);&#xA;                if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;                    break;&#xA;                else if (ret &lt; 0) {&#xA;                    cerr &lt;&lt; "Error during decoding" &lt;&lt; endl;&#xA;                    exit(1);&#xA;                }&#xA;                if (outAVFormatContext->streams[outAudioStreamIndex]->start_time &lt;= 0) {&#xA;                    outAVFormatContext->streams[outAudioStreamIndex]->start_time = rawFrame->pts;&#xA;                }&#xA;                initConvertedSamples(&amp;resampledData, outAudioCodecContext, rawFrame->nb_samples);&#xA;&#xA;                swr_convert(resampleContext,&#xA;                    resampledData, rawFrame->nb_samples,&#xA;                    (const uint8_t**)rawFrame->extended_data, rawFrame->nb_samp&#xA;&#xA;                add_samples_to_fifo(resampledData, rawFrame->nb_samples);&#xA;&#xA;                //raw frame ready&#xA;                av_init_packet(outPacket);&#xA;                outPacket->data = nullptr;&#xA;                outPacket->size = 0;&#xA;&#xA;                const int frame_size = FFMAX(av_audio_fifo_size(fifo), outAudioCodecContext->frame_size);&#xA;&#xA;                scaledFrame = av_frame_alloc();&#xA;                if (!scaledFrame) {&#xA;                    cerr &lt;&lt; "Cannot allocate an AVPacket for encoded video" &lt;&lt; endl;&#xA;                    exit(1);&#xA;                }&#xA;&#xA;                scaledFrame->nb_samples = outAudioCodecContext->frame_size;&#xA;                scaledFrame->channel_layout = outAudioCodecContext->channel_layout;&#xA;                scaledFrame->format = outAudioCodecContext->sample_fmt;&#xA;                scaledFrame->sample_rate = outAudioCodecContext->sample_rate;&#xA;                av_frame_get_buffer(scaledFrame, 0);&#xA;&#xA;                while (av_audio_fifo_size(fifo) >= outAudioCodecContext->frame_size) {&#xA;&#xA;                    ret = av_audio_fifo_read(fifo, (void**)(scaledFrame->data), outAudioCodecContext->frame_size);&#xA;                    scaledFrame->pts = pts;&#xA;                    pts &#x2B;= scaledFrame->nb_samples;&#xA;                    if (avcodec_send_frame(outAudioCodecContext, scaledFrame) &lt; 0) {&#xA;                        cout &lt;&lt; "Cannot encode current audio packet " &lt;&lt; endl;&#xA;                        exit(1);&#xA;                    }&#xA;                    while (ret >= 0) {&#xA;                        ret = avcodec_receive_packet(outAudioCodecContext, outPacket);&#xA;                        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)&#xA;                            break;&#xA;                        else if (ret &lt; 0) {&#xA;                            cerr &lt;&lt; "Error during encoding" &lt;&lt; endl;&#xA;                            exit(1);&#xA;                        }&#xA;    
                    av_packet_rescale_ts(outPacket, outAudioCodecContext->time_base, outAVFormatContext->streams[outAudioStreamIndex]->time_base);&#xA;&#xA;                        outPacket->stream_index = outAudioStreamIndex;&#xA;&#xA;                        write_lock.lock();&#xA;                        &#xA;                        if (av_write_frame(outAVFormatContext, outPacket) != 0)&#xA;                        {&#xA;                            cerr &lt;&lt; "Error in writing audio frame" &lt;&lt; endl;&#xA;                        }&#xA;                        write_lock.unlock();&#xA;                        av_packet_unref(outPacket);&#xA;                    }&#xA;                    ret = 0;&#xA;                }&#xA;                av_frame_free(&amp;scaledFrame);&#xA;                av_packet_unref(outPacket);&#xA;            }&#xA;        }&#xA;    }&#xA;}&#xA;&#xA;int ScreenRecorder::captureVideoFrames() {&#xA;    int64_t pts = 0;&#xA;    int flag;&#xA;    int frameFinished = 0;&#xA;    bool endPause = false;&#xA;    int numPause = 0;&#xA;&#xA;    ofstream outFile{ "..\\media\\log.txt", ios::out };&#xA;&#xA;    int frameIndex = 0;&#xA;    value = 0;&#xA;&#xA;    pAVPacket = (AVPacket*)av_malloc(sizeof(AVPacket));&#xA;    if (pAVPacket == nullptr) {&#xA;        cerr &lt;&lt; "Error in allocating AVPacket" &lt;&lt; endl;&#xA;        exit(-1);&#xA;    }&#xA;&#xA;    pAVFrame = av_frame_alloc();&#xA;    if (pAVFrame == nullptr) {&#xA;        cerr &lt;&lt; "Error: unable to alloc the AVFrame resources" &lt;&lt; endl;&#xA;        exit(-1);&#xA;    }&#xA;&#xA;    outFrame = av_frame_alloc();&#xA;    if (outFrame == nullptr) {&#xA;        cerr &lt;&lt; "Error: unable to alloc the AVFrame resources for out frame" &lt;&lt; endl;&#xA;        exit(-1);&#xA;    }&#xA;&#xA;    int videoOutBuffSize;&#xA;    int nBytes = av_image_get_buffer_size(outVideoCodecContext->pix_fmt, outVideoCodecContext->width, outVideoCodecContext->height, 32);&#xA;    uint8_t* videoOutBuff = (uint8_t*)av_malloc(nBytes);&#xA;&#xA;    if (videoOutBuff == nullptr) {&#xA;        cerr &lt;&lt; "Error: unable to allocate memory" &lt;&lt; endl;&#xA;        exit(-1);&#xA;    }&#xA;&#xA;    value = av_image_fill_arrays(outFrame->data, outFrame->linesize, videoOutBuff, AV_PIX_FMT_YUV420P, outVideoCodecContext->width, outVideoCodecContext->height, 1);&#xA;    if (value &lt; 0) {&#xA;        cerr &lt;&lt; "Error in filling image array" &lt;&lt; endl;&#xA;    }&#xA;&#xA;    SwsContext* swsCtx_;&#xA;    if (avcodec_open2(pAVCodecContext, pAVCodec, nullptr) &lt; 0) {&#xA;        cerr &lt;&lt; "Could not open codec" &lt;&lt; endl;&#xA;        exit(-1);&#xA;    }&#xA;    swsCtx_ = sws_getContext(pAVCodecContext->width, pAVCodecContext->height, pAVCodecContext->pix_fmt, outVideoCodecContext->width, outVideoCodecContext->height, outVideoCodecContext->pix_fmt, SWS_BICUBIC,&#xA;        nullptr, nullptr, nullptr);&#xA;&#xA;    AVPacket outPacket;&#xA;    int gotPicture;&#xA;&#xA;    time_t startTime;&#xA;    time(&amp;startTime);&#xA;&#xA;    while (true) {&#xA;&#xA;        if (pauseCapture) {&#xA;            cout &lt;&lt; "Pause" &lt;&lt; endl;&#xA;            outFile &lt;&lt; "///////////////////   Pause  ///////////////////" &lt;&lt; endl;&#xA;            cout &lt;&lt; "outVideoCodecContext->time_base: " &lt;&lt; outVideoCodecContext->time_base.num &lt;&lt; ", " &lt;&lt; outVideoCodecContext->time_base.den &lt;&lt; endl;&#xA;        }&#xA;        cv.wait(ul, [this]() { return !pauseCapture; });   //pause capture (not busy 
waiting)&#xA;        if (endPause) {&#xA;            endPause = false;&#xA;        }&#xA;&#xA;        if (stopCapture)  //check if the capture has to stop&#xA;            break;&#xA;        ul.unlock();&#xA;&#xA;        if (av_read_frame(pAVFormatContext, pAVPacket) >= 0 &amp;&amp; pAVPacket->stream_index == VideoStreamIndx) {&#xA;            av_packet_rescale_ts(pAVPacket, pAVFormatContext->streams[VideoStreamIndx]->time_base, pAVCodecContext->time_base);&#xA;            value = avcodec_decode_video2(pAVCodecContext, pAVFrame, &amp;frameFinished, pAVPacket);&#xA;            if (value &lt; 0) {&#xA;                cout &lt;&lt; "Unable to decode video" &lt;&lt; endl;&#xA;            }&#xA;&#xA;            if (frameFinished) { //frame successfully decoded&#xA;                //sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize, 0, pAVCodecContext->height, outFrame->data, outFrame->linesize);&#xA;                av_init_packet(&amp;outPacket);&#xA;                outPacket.data = nullptr;&#xA;                outPacket.size = 0;&#xA;&#xA;                if (outAVFormatContext->streams[outVideoStreamIndex]->start_time &lt;= 0) {&#xA;                    outAVFormatContext->streams[outVideoStreamIndex]->start_time = pAVFrame->pts;&#xA;                }&#xA;&#xA;                //disable warning on the console&#xA;                outFrame->width = outVideoCodecContext->width;&#xA;                outFrame->height = outVideoCodecContext->height;&#xA;                outFrame->format = outVideoCodecContext->pix_fmt;&#xA;&#xA;                sws_scale(swsCtx_, pAVFrame->data, pAVFrame->linesize, 0, pAVCodecContext->height, outFrame->data, outFrame->linesize);&#xA;&#xA;                avcodec_encode_video2(outVideoCodecContext, &amp;outPacket, outFrame, &amp;gotPicture);&#xA;&#xA;                if (gotPicture) {&#xA;                    if (outPacket.pts != AV_NOPTS_VALUE) {&#xA;                        outPacket.pts = av_rescale_q(outPacket.pts, videoSt->codec->time_base, videoSt->time_base);&#xA;                    }&#xA;                    if (outPacket.dts != AV_NOPTS_VALUE) {&#xA;                        outPacket.dts = av_rescale_q(outPacket.dts, videoSt->codec->time_base, videoSt->time_base);&#xA;                    }&#xA;&#xA;                    //cout &lt;&lt; "Write frame " &lt;&lt; j&#x2B;&#x2B; &lt;&lt; " (size = " &lt;&lt; outPacket.size / 1000 &lt;&lt; ")" &lt;&lt; endl;&#xA;                    //cout &lt;&lt; "(size = " &lt;&lt; outPacket.size &lt;&lt; ")" &lt;&lt; endl;&#xA;&#xA;                    //av_packet_rescale_ts(&amp;outPacket, outVideoCodecContext->time_base, outAVFormatContext->streams[outVideoStreamIndex]->time_base);&#xA;                    //outPacket.stream_index = outVideoStreamIndex;&#xA;&#xA;                    outFile &lt;&lt; "outPacket->duration: " &lt;&lt; outPacket.duration &lt;&lt; ", " &lt;&lt; "pAVPacket->duration: " &lt;&lt; pAVPacket->duration &lt;&lt; endl;&#xA;                    outFile &lt;&lt; "outPacket->pts: " &lt;&lt; outPacket.pts &lt;&lt; ", " &lt;&lt; "pAVPacket->pts: " &lt;&lt; pAVPacket->pts &lt;&lt; endl;&#xA;                    outFile &lt;&lt; "outPacket.dts: " &lt;&lt; outPacket.dts &lt;&lt; ", " &lt;&lt; "pAVPacket->dts: " &lt;&lt; pAVPacket->dts &lt;&lt; endl;&#xA;&#xA;                    time_t timer;&#xA;                    double seconds;&#xA;&#xA;                    mu.lock();&#xA;                    if (!activeMenu) {&#xA;                        time(&amp;timer);&#xA;                        seconds = difftime(timer, startTime);&#xA;   
                     int h = (int)(seconds / 3600);&#xA;                        int m = (int)(seconds / 60) % 60;&#xA;                        int s = (int)(seconds) % 60;&#xA;&#xA;                        std::cout &lt;&lt; std::flush &lt;&lt; "\r" &lt;&lt; std::setw(2) &lt;&lt; std::setfill(&#x27;0&#x27;) &lt;&lt; h &lt;&lt; &#x27;:&#x27;&#xA;                            &lt;&lt; std::setw(2) &lt;&lt; std::setfill(&#x27;0&#x27;) &lt;&lt; m &lt;&lt; &#x27;:&#x27;&#xA;                            &lt;&lt; std::setw(2) &lt;&lt; std::setfill(&#x27;0&#x27;) &lt;&lt; s &lt;&lt; std::flush;&#xA;                    }&#xA;                    mu.unlock();&#xA;&#xA;                    write_lock.lock();&#xA;                    if (av_write_frame(outAVFormatContext, &amp;outPacket) != 0) {&#xA;                        cerr &lt;&lt; "Error in writing video frame" &lt;&lt; endl;&#xA;                    }&#xA;                    write_lock.unlock();&#xA;                    av_packet_unref(&amp;outPacket);&#xA;                }&#xA;&#xA;                av_packet_unref(&amp;outPacket);&#xA;                av_free_packet(pAVPacket);  //avoid memory saturation&#xA;            }&#xA;        }&#xA;    }&#xA;&#xA;    outFile.close();&#xA;&#xA;    av_free(videoOutBuff);&#xA;&#xA;    return 0;&#xA;}&#xA;
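For reference, the FIFO-to-encoder-to-muxer step buried inside captureAudio() can be isolated into a small helper built on the same FFmpeg send/receive API. The sketch below is not part of the original class: the function name drainAudioFifo and its parameter list are illustrative assumptions, and inside the recorder the av_write_frame call would still need to be guarded by the shared write_lock used by both capture loops.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/audio_fifo.h>
}

//Encode and write out every full frame currently buffered in the audio FIFO.
//Returns 0 on success or a negative AVERROR code (sketch only, no locking).
static int drainAudioFifo(AVAudioFifo* fifo, AVCodecContext* encCtx,
                          AVFormatContext* outFmtCtx, int outStreamIndex,
                          int64_t& nextPts) {
    while (av_audio_fifo_size(fifo) >= encCtx->frame_size) {
        AVFrame* frame = av_frame_alloc();
        if (!frame) return AVERROR(ENOMEM);
        frame->nb_samples = encCtx->frame_size;
        frame->channel_layout = encCtx->channel_layout;
        frame->format = encCtx->sample_fmt;
        frame->sample_rate = encCtx->sample_rate;
        if (av_frame_get_buffer(frame, 0) < 0) {
            av_frame_free(&frame);
            return AVERROR(ENOMEM);
        }

        //pull exactly one encoder frame worth of samples out of the FIFO
        if (av_audio_fifo_read(fifo, (void**)frame->data, encCtx->frame_size) < encCtx->frame_size) {
            av_frame_free(&frame);
            return AVERROR_EXIT;
        }
        frame->pts = nextPts;
        nextPts += frame->nb_samples;

        int ret = avcodec_send_frame(encCtx, frame);
        av_frame_free(&frame);
        if (ret < 0) return ret;

        AVPacket pkt;
        av_init_packet(&pkt);
        pkt.data = nullptr;
        pkt.size = 0;
        while ((ret = avcodec_receive_packet(encCtx, &pkt)) >= 0) {
            //convert from the encoder time base to the muxer stream time base before writing
            av_packet_rescale_ts(&pkt, encCtx->time_base, outFmtCtx->streams[outStreamIndex]->time_base);
            pkt.stream_index = outStreamIndex;
            int err = av_write_frame(outFmtCtx, &pkt);
            av_packet_unref(&pkt);
            if (err < 0) return err;
        }
        if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) return ret;
    }
    return 0;
}

A caller could invoke drainAudioFifo(fifo, outAudioCodecContext, outAVFormatContext, outAudioStreamIndex, pts) right after add_samples_to_fifo(), which would also keep the per-iteration allocation of scaledFrame out of the read loop.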