Advanced search

Media (0)

Keyword: - Tags -/xmlrpc

No media matching your criteria is available on this site.

Other articles (47)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present changes on your MediaSPIP site or news about your projects through the news section.
    In MediaSPIP's default spipeo theme, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the news creation form.
    News creation form: for a document of type news, the fields offered by default are: publication date (customise the publication date) (...)

  • Enabling/disabling features (plugins)

    18 February 2011, by

    To manage adding and removing extra features (plugins), MediaSPIP uses SVP as of version 0.2.
    SVP makes it easy to activate plugins from the MediaSPIP configuration area.
    To access it, simply go to the configuration area and then to the "Plugin management" page.
    MediaSPIP ships by default with the full set of so-called "compatible" plugins; they have been tested and integrated so that they work perfectly with each (...)

On other sites (7692)

  • A Beginner’s Guide to Omnichannel Analytics

    14 April 2024, by Erin

    Linear customer journeys are as obsolete as dial-up internet and floppy disks. As a marketing manager, you know better than anyone that customers interact with your brand hundreds of times across dozens of channels before purchasing. That can make tracking them a nightmare unless you build an omnichannel analytics solution. 

    Alas, if only it were that simple. 

    Unfortunately, it’s not enough to collect data on your customers’ complex journeys just by buying an omnichannel platform. You need to generate actionable insights by using marketing attribution to tie channels to conversions. 

    This article will explain how to build a useful omnichannel analytics solution that lets you understand and improve the customer journey.

    What is omnichannel analytics?

    Omnichannel analytics collects and analyses customer data from every touchpoint and device. The goal is to collect all this omnichannel data in one place, creating a single, real-time, unified view of your customer’s journey.


    Unfortunately, most businesses haven’t achieved this yet. As Karen Lellouche Tordjman and Marco Bertini say:

    “Despite all the buzz around the concept of omnichannel, most companies still view customer journeys as a linear sequence of standardised touchpoints within a given channel. But the future of customer engagement transforms touchpoints from nodes along a predefined distribution path to full-blown portals that can serve as points of sale or pathways to many other digital and virtual interactions. They link to chatbots, kiosks, robo-advisors, and other tools that customers — especially younger ones — want to engage with.”

    However, doing so is more important than ever — especially when consumers have over 300 digital touchpoints, and the average number of touchpoints in the B2B buyer journey is 27.

    Not only that, but customers expect personalised experiences across every platform — that’s the kind you can only create when you have access to omnichannel data.

    [Image: a diagram showing how complex customer journeys are]

    What might omnichannel analytics look like in practice for an e-commerce store?

    An online store would integrate data from channels like its website, mobile app, social media accounts, Google Ads and customer service records. This would show how customers find its brand, how they use each channel to interact with it and which channels convert the most customers. 

    This would allow the e-commerce store to tailor marketing channels to customers’ needs. For instance, they could focus social media use on product discovery and customer support. Google Ads campaigns could target the best-converting products. While all this is happening, the store could also ensure every channel looks the same and delivers the same experience. 

    What are the benefits of omnichannel analytics?

    Why go to all the trouble of creating a comprehensive view of the customer’s experience? Because you stand to gain some pretty significant benefits when implementing omnichannel analytics.


    Understand the customer journey

    You want to understand how your customers behave, right? No other method will allow you to fully understand your customer journey the way omnichannel analytics does.

    It doesn’t matter how customers engage with your brand — whether that’s your website, app, social media profiles or physical stores — omnichannel analytics captures every interaction.

    With this 360-degree view of your customers, it’s easy to understand how they move between channels, where they encounter issues and what bottlenecks prevent them from converting. 

    Deliver better personalisation

    We don’t have to tell you that personalisation matters. But do you know just how important it is? Since 56% of customers become repeat buyers after a personalised experience, delivering personalised experiences as often as possible is critical.

    Omnichannel analytics helps in your quest for personalisation by highlighting the individual preferences of customer segments. For example, e-commerce stores can use omnichannel analytics to understand how shoppers behave across different devices and tailor their offers accordingly. 

    Upgrade the customer experience

    Omnichannel analytics gives you the insights to improve every aspect of the customer experience. 

    For starters, you can ensure a consistent brand experience across all your top channels by making sure they look and behave the same.

    Then, you can use omnichannel insights to tailor each channel to your customers’ requirements. For example, most people interacting with your brand on social media may be seeking support. Knowing that, you can create dedicated support accounts to assist them.

    Improve marketing campaigns

    Which marketing campaigns or traffic sources convert the most customers? How can you improve these campaigns? Omnichannel analytics has the answers.

    When you implement omnichannel analytics, you automatically track the performance of every marketing channel by attributing each conversion to one or more traffic sources. This lets you see whether Google Ads bring in more customers than your SEO efforts, or whether social media ads are your most profitable acquisition channel.

    Armed with this information, you can improve your marketing efforts — either by focusing on your profitable channels or rectifying problems that stop less profitable channels from converting.

    What are the challenges of omnichannel analytics?

    There are three challenges when implementing an omnichannel analytics solution:

    • Complex customer journeys: Customer journeys aren’t linear and can be incredibly difficult to track.
    • Regulatory and privacy issues: When you start gathering customer data, you quickly come up against consumer privacy laws.
    • No underlying goal: There has to be a reason to go to all this effort, but brands don’t always have goals in mind before they start.

    You can’t do anything about the first challenge. 

    After all, your customer journey will almost never be linear. And isn’t the point of implementing an omnichannel solution to understand these complex journeys in the first place? Once you set up omnichannel analytics, these journeys will be much easier to decipher.

    As for the other two:

    Using the right software that respects user privacy and complies with all major privacy laws will avoid regulatory issues. Take Matomo, for instance. Our software was designed with privacy in mind and is configured to follow the strictest privacy laws, such as GDPR. 

    Tying omnichannel analytics to marketing attribution will solve the final challenge by giving your omnichannel efforts a goal. When you tie omnichannel analytics to your marketing efforts, you aren’t just getting a 360-degree view of your customer journey for the sake of it. You are getting that view to improve your marketing efforts and increase sales.


    How to set up an omnichannel analytics solution

    Want to set up a seamless analytical environment that incorporates data from every possible source? Follow these five steps:

    Choose one or more analytics providers

    You can use several tools to build an omnichannel analytics solution. These include web and app analytics tools, customer data platforms that centralise first-party data and business intelligence tools (typically used for visualisation). 

    Which tools you use will depend on your goals and your budget — the loftier your ambitions and the higher your budget, the more tools you can use. 

    Ideally, you should use as few tools as possible to capture your data. Most teams won’t need business intelligence platforms, for example. However, you may or may not need both an analytics platform and a customer data platform. Your decision will depend on how many channels your customers use and how well your analytics tool tracks everything.

    If it can capture web and app usage while integrating with third-party platforms like your back-end e-commerce platform, then it’s probably enough.

    Collect accurate data at every touchpoint 

    Your omnichannel analytics efforts hinge on the quantity and quality of data you can collect. You want to gather data from every touchpoint possible and store that data in as few places as possible. That’s why choosing as few tools as possible in the step above is so important. 

    So, where should you start? Common data sources include:

    • Your website
    • Apps (iOS and Android)
    • Social media profiles
    • ERPs
    • PoS systems

    At the same time, make sure you’re tracking all relevant metrics. Revenue, customer engagement and conversion-focused metrics like conversion rate, dwell time, cart abandonment rate and churn rate are particularly important. 
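
    As a concrete illustration of server-side collection (a minimal sketch, not taken from the article, assuming a hypothetical Matomo instance at https://analytics.example.com with site ID 1), a back-end touchpoint such as a point-of-sale purchase can be pushed to Matomo's HTTP Tracking API:

import requests

MATOMO_URL = "https://analytics.example.com/matomo.php"  # hypothetical Matomo instance
SITE_ID = 1  # hypothetical site ID

def track_touchpoint(action_name: str, url: str, visitor_id: str) -> None:
    """Record one server-side interaction via Matomo's HTTP Tracking API."""
    params = {
        "idsite": SITE_ID,       # which Matomo site the hit belongs to
        "rec": 1,                # required for the hit to be recorded
        "action_name": action_name,
        "url": url,
        "_id": visitor_id,       # 16-character hex visitor ID, used to stitch journeys together
        "apiv": 1,
    }
    requests.get(MATOMO_URL, params=params, timeout=5)

# Example: log a point-of-sale purchase as a touchpoint
track_touchpoint("PoS / Purchase", "https://store.example.com/pos/checkout", "0123456789abcdef")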

    Set up marketing attribution

    Setting up marketing attribution (also known as multi-touch attribution) is essential to tie omnichannel data to business goals. It’s the only way to know exactly how valuable each marketing channel is and where each customer comes from. 

    You’ll want to use multi-touch attribution, given you have data from across the customer journey.

    [Image: six different attribution models]

    Multi-touch attribution models can include (but are not limited to):

    • Linear: where each touchpoint is given equal weighting
    • Time decay: where touchpoints are more valuable the nearer they are to conversion
    • Position-based: where the first and last touchpoints are more valuable than all the others

    You don’t have to use just one of the models above, however. One of the benefits of using a web analytics tool like Matomo is that you can choose between different attribution models and compare them.
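
    To make the differences concrete, here is a small illustrative sketch (my own addition, not from the article) that splits credit for a single conversion across an ordered list of touchpoints under each of the three models:

def linear(touchpoints):
    """Every touchpoint gets an equal share of the conversion."""
    share = 1 / len(touchpoints)
    return {t: share for t in touchpoints}

def time_decay(touchpoints, decay=0.5):
    """Touchpoints closer to the conversion get exponentially more credit."""
    weights = [decay ** (len(touchpoints) - 1 - i) for i in range(len(touchpoints))]
    total = sum(weights)
    return {t: w / total for t, w in zip(touchpoints, weights)}

def position_based(touchpoints, endpoint_share=0.4):
    """First and last touchpoints each get a fixed share (40% is a common choice); the middle shares the rest."""
    if len(touchpoints) <= 2:
        return linear(touchpoints)
    middle_share = (1 - 2 * endpoint_share) / (len(touchpoints) - 2)
    credit = {t: middle_share for t in touchpoints[1:-1]}
    credit[touchpoints[0]] = endpoint_share
    credit[touchpoints[-1]] = endpoint_share
    return credit

journey = ["Social ad", "Organic search", "Email", "Direct visit"]
print(linear(journey))          # 25% each
print(time_decay(journey))      # later touchpoints weighted more heavily
print(position_based(journey))  # 40% / 10% / 10% / 40%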


    Create reports that help you visualise data

    Dashboards are your friend here. They’ll let you see KPIs at a glance, allowing you to keep track of day-to-day changes in your customer journey. Ideally, you’ll want a platform that lets you customise dashboard widgets so only relevant KPIs are shown. 

    [Image: a custom graph created in Matomo]

    Setting up standard and custom reports is also important. Custom reports allow you to choose metrics and dimensions that align with your goals. They will also allow you to present your data most meaningfully to your team, increasing the likelihood they act upon insights. 
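
    As an aside, report data does not have to stay in the UI: a minimal sketch of pulling a visits summary through Matomo's Reporting API (the base URL, site ID, date range and token below are placeholders) could look like this:

import requests

def fetch_visits_summary(base_url, site_id, token_auth, period="week", date="last4"):
    """Fetch a visits summary report from Matomo's Reporting API as JSON."""
    params = {
        "module": "API",
        "method": "VisitsSummary.get",
        "idSite": site_id,
        "period": period,
        "date": date,
        "format": "JSON",
        "token_auth": token_auth,
    }
    response = requests.get(f"{base_url}/index.php", params=params, timeout=10)
    response.raise_for_status()
    return response.json()

# Example with placeholder values
report = fetch_visits_summary("https://analytics.example.com", 1, "anonymous")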

    Analyse data and take action

    Now that you have customer journey data at your fingertips, it’s time to analyse it. After all, there’s no point in implementing an omnichannel analytics solution if you aren’t going to take action. 

    If you’re unsure where to start, re-read the benefits we listed at the start of this article. You could use your omnichannel insights to improve your marketing campaigns by doubling down on the channels that bring in the best customers.

    Or you could identify (and fix) bottlenecks in the customer journey so customers are less likely to fall out of your funnel between certain channels. 

    Just make sure you take action based on your data alone.

    Make the most of omnichannel analytics with Matomo

    A comprehensive web and app analytics platform is vital to any omnichannel analytics strategy. 

    But not just any solution will do. With privacy regulations bearing down on omnichannel analytics, you need a platform that captures accurate data without breaking privacy laws or betraying your users’ trust.

    That’s where Matomo comes in. Our privacy-friendly web analytics platform ensures accurate tracking of web traffic while keeping you compliant with even the strictest regulations. Moreover, our range of APIs and SDKs makes it easy to track interactions from all your digital products (website, apps, e-commerce back-ends, etc.) in one place. 

    Try Matomo for free for 21 days. No credit card required.

  • ffmpeg failed to load audio file

    14 April 2024, by Vaishnav Ghenge
    Failed to load audio: ffmpeg version 5.1.4-0+deb12u1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12 (Debian 12.2.0-14)
  configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
  libavutil      57. 28.100 / 57. 28.100
  libavcodec     59. 37.100 / 59. 37.100
  libavformat    59. 27.100 / 59. 27.100
  libavdevice    59.  7.100 / 59.  7.100
  libavfilter     8. 44.100 /  8. 44.100
  libswscale      6.  7.100 /  6.  7.100
  libswresample   4.  7.100 /  4.  7.100
  libpostproc    56.  6.100 / 56.  6.100
/tmp/tmpjlchcpdm.wav: Invalid data found when processing input

    Backend:

# Imports added for completeness; `app`, `celery_app` and `transcribe_with_whisper`
# are assumed to be defined elsewhere in the application.
import os
import time
import uuid
import tempfile

from flask import jsonify, request
from werkzeug.utils import secure_filename


@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Check that an audio file is present in the request
    if 'audio_file' not in request.files:
        return jsonify({"error": "No file part"}), 400

    audio_file = request.files.get('audio_file')
    if not audio_file:
        return jsonify({"error": "`audio_file` is missing in request.files"}), 400

    # Reject requests where no file was selected
    if audio_file.filename == '':
        return jsonify({"error": "No selected file"}), 400

    # Build a unique name for the upload (note: the file is never actually saved to disk here)
    filename = secure_filename(audio_file.filename)
    unique_filename = os.path.join("uploads", str(uuid.uuid4()) + '_' + filename)
    # audio_file.save(unique_filename)

    # Read the contents of the audio file and enforce a 500 MB limit
    contents = audio_file.read()
    max_file_size = 500 * 1024 * 1024
    if len(contents) > max_file_size:
        return jsonify({"error": "File is too large"}), 400

    # Only accept uploads whose extension suggests a WAV file
    # (nothing was written to disk above, so there is nothing to delete here)
    if not filename.lower().endswith('.wav'):
        return jsonify({"error": "Only WAV files are supported"}), 400

    print(f"\033[92m{filename}\033[0m")

    # Call the Celery task asynchronously
    result = transcribe_audio.delay(contents)

    return jsonify({
        "task_id": result.id,
        "status": "pending"
    })


@celery_app.task
def transcribe_audio(contents):
    try:
        # Write the audio bytes to a temporary file and transcribe it
        with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_audio:
            temp_path = temp_audio.name
            temp_audio.write(contents)
            temp_audio.flush()  # make sure the bytes are on disk before they are read back

            print(f"\033[92mFile temporary path: {temp_path}\033[0m")
            transcribe_start_time = time.time()

            # Transcribe the audio
            transcription = transcribe_with_whisper(temp_path)

            transcribe_end_time = time.time()
            print(f"\033[92mTranscribed text: {transcription}\033[0m")

            return transcription, transcribe_end_time - transcribe_start_time

    except Exception as e:
        print(f"\033[92mError: {e}\033[0m")
        return str(e)

    Frontend:

    // chunks, recorder, stream (and their setters) are presumably useState hooks
    // declared earlier in this component; pendingTaskIdsRef is presumably a useRef.
    useEffect(() => {
        const init = () => {
            navigator.mediaDevices.getUserMedia({audio: true})
                .then((audioStream) => {
                    const recorder = new MediaRecorder(audioStream);

                    recorder.ondataavailable = e => {
                        if (e.data.size > 0) {
                            setChunks(prevChunks => [...prevChunks, e.data]);
                        }
                    };

                    recorder.onerror = (e) => {
                        console.log("error: ", e);
                    }

                    recorder.onstart = () => {
                        console.log("started");
                    }

                    recorder.start();

                    setStream(audioStream);
                    setRecorder(recorder);
                });
        }

        init();

        return () => {
            if (recorder && recorder.state === 'recording') {
                recorder.stop();
            }

            if (stream) {
                stream.getTracks().forEach(track => track.stop());
            }
        }
    }, []);

    useEffect(() => {
        // Send chunks of audio data to the backend at regular intervals
        const intervalId = setInterval(() => {
            if (recorder && recorder.state === 'recording') {
                recorder.requestData(); // Trigger data available event
            }
        }, 8000); // Adjust the interval as needed


        return () => {
            if (intervalId) {
                console.log("Interval cleared");
                clearInterval(intervalId);
            }
        };
    }, [recorder]);

    useEffect(() => {
        const processAudio = async () => {
            if (chunks.length > 0) {
                // Send the latest chunk to the server for transcription
                const latestChunk = chunks[chunks.length - 1];

                const audioBlob = new Blob([latestChunk]);
                convertBlobToAudioFile(audioBlob);
            }
        };

        void processAudio();
    }, [chunks]);

    const convertBlobToAudioFile = useCallback((blob: Blob) => {
        // Convert Blob to audio file (e.g., WAV)
        // This conversion may require using a third-party library or service
        // For example, you can use the MediaRecorder API to record audio in WAV format directly
        // Alternatively, you can use a library like recorderjs to perform the conversion
        // Here's a simplified example using recorderjs:

        const reader = new FileReader();
        reader.onload = () => {
            const audioBuffer = reader.result; // ArrayBuffer containing audio data

            // Send audioBuffer to Flask server or perform further processing
            sendAudioToFlask(audioBuffer as ArrayBuffer);
        };

        reader.readAsArrayBuffer(blob);
    }, []);

    const sendAudioToFlask = useCallback((audioBuffer: ArrayBuffer) => {
        const formData = new FormData();
        formData.append('audio_file', new Blob([audioBuffer]), `speech_audio.wav`);

        console.log(formData.get("audio_file"));

        fetch('http://34.87.75.138:8000/transcribe', {
            method: 'POST',
            body: formData
        })
            .then(response => response.json())
            .then((data: { task_id: string, status: string }) => {
                pendingTaskIdsRef.current.push(data.task_id);
            })
            .catch(error => {
                console.error('Error sending audio to Flask server:', error);
            });
    }, []);

    I was trying to pass audio from the frontend to the Whisper model running in the Flask app.
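
    A note added for context (not part of the original question): unless a mimeType is passed to MediaRecorder, browsers typically produce WebM/Opus chunks, so the bytes uploaded as speech_audio.wav are usually not a WAV container at all, which would be consistent with ffmpeg's "Invalid data found when processing input". A minimal, assumption-laden sketch for probing what the backend actually received (it assumes ffprobe is installed on the server):

import json
import subprocess

def probe_container(path: str) -> dict:
    """Return ffprobe's view of the file (container format and streams) as a dict."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# Example usage on the temporary file created by the Celery task:
# probe_container(temp_path)["format"]["format_name"]
# MediaRecorder output will typically report "matroska,webm" rather than "wav".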

  • FFmpeg RTSP drop rate increases when frame rate is reduced

    13 April 2024, by Avishka Perera

    I need to read an RTSP stream, process the images individually in Python, and then write the images back to an RTSP stream. As the RTSP server, I am using Mediamtx [1]. For streaming, I am using FFmpeg [2].

    I have the following code that works perfectly fine. For simplification purposes, I am streaming three generated images.

import time
import numpy as np
import subprocess

width, height = 640, 480
fps = 25
rtsp_server_address = f"rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f",
    "rawvideo",
    "-pix_fmt",
    "rgb24",
    "-s",
    f"{width}x{height}",
    "-i",
    "-",
    "-r",
    str(fps),
    "-avoid_negative_ts",
    "make_zero",
    "-vcodec",
    "libx264",
    "-threads",
    "4",
    "-f",
    "rtsp",
    rtsp_server_address,
]
colors = np.array(
    [
        [255, 0, 0],
        [0, 255, 0],
        [0, 0, 255],
    ]
).reshape(3, 1, 1, 3)
images = (np.ones((3, width, height, 3)) * colors).astype(np.uint8)

if __name__ == "__main__":

    process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)
    start = time.time()
    exported = 0
    while True:
        exported += 1
        next_time = start + exported / fps
        now = time.time()
        if next_time > now:
            sleep_dur = next_time - now
            time.sleep(sleep_dur)

        image = images[exported % 3]
        image_bytes = image.tobytes()

        process.stdin.write(image_bytes)
        process.stdin.flush()

    process.stdin.close()
    process.wait()

    The issue is that I need to run this at 10 fps, because the processing step is heavy and can only sustain 10 fps. However, as I reduce the frame rate from 25 to 10, the drop rate increases from 0% to 100%, and after a few iterations I get a BrokenPipeError: [Errno 32] Broken pipe. Refer to the appendix for the complete log.

    As an alternative, I could use OpenCV compiled from source with GStreamer [3], but I prefer using FFmpeg to keep the shipping process simple, since compiling OpenCV from source can be tedious and system-dependent.
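
    One detail worth flagging when reading the script above (my observation, not something stated in the original post): the -r option there appears after -i -, so it only acts as an output option, while FFmpeg's rawvideo demuxer assumes 25 fps on stdin unless told otherwise (the appendix log indeed shows 25 tbr on the input and 10 fps on the output). A hedged sketch of declaring the rate on the input side as well, reusing the same values as the script, might look like this:

import subprocess

width, height = 640, 480
fps = 10  # the rate the processing loop can actually sustain
rtsp_server_address = "rtsp://localhost:8554/mystream"

ffmpeg_cmd = [
    "ffmpeg",
    "-re",
    "-f", "rawvideo",
    "-pix_fmt", "rgb24",
    "-s", f"{width}x{height}",
    "-framerate", str(fps),  # input option: how fast raw frames arrive on stdin
    "-i", "-",
    "-r", str(fps),          # output option, as in the original command
    "-avoid_negative_ts", "make_zero",
    "-vcodec", "libx264",
    "-threads", "4",
    "-f", "rtsp",
    rtsp_server_address,
]

process = subprocess.Popen(ffmpeg_cmd, stdin=subprocess.PIPE)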

    References

    [1] Mediamtx (formerly rtsp-simple-server): https://github.com/bluenviron/mediamtx

    [2] FFmpeg: https://github.com/FFmpeg/FFmpeg

    [3] Compile OpenCV with GStreamer: https://github.com/bluenviron/mediamtx?tab=readme-ov-file#opencv

    Appendix

    Creating the source stream

    To instantiate the unprocessed stream, I use the following command. This streams the content of my webcam as an RTSP stream.

    ffmpeg -video_size 1280x720 -i /dev/video0 -avoid_negative_ts make_zero -vcodec libx264 -r 10 -f rtsp rtsp://localhost:8554/webcam

    Error log

ffmpeg version 6.1.1 Copyright (c) 2000-2023 the FFmpeg developers
  built with gcc 12.3.0 (conda-forge gcc 12.3.0-5)
  configuration: --prefix=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_plac --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --disable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --enable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libopus --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1712656518955/_build_env/bin/pkg-config
  libavutil      58. 29.100 / 58. 29.100
  libavcodec     60. 31.102 / 60. 31.102
  libavformat    60. 16.100 / 60. 16.100
  libavdevice    60.  3.100 / 60.  3.100
  libavfilter     9. 12.100 /  9. 12.100
  libswscale      7.  5.100 /  7.  5.100
  libswresample   4. 12.100 /  4. 12.100
  libpostproc    57.  3.100 / 57.  3.100
Input #0, rawvideo, from 'fd:':
  Duration: N/A, start: 0.000000, bitrate: 184320 kb/s
  Stream #0:0: Video: rawvideo (RGB[24] / 0x18424752), rgb24, 640x480, 184320 kb/s, 25 tbr, 25 tbn
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
[libx264 @ 0x5e2ef8b01340] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX FMA3 BMI2 AVX2
[libx264 @ 0x5e2ef8b01340] profile High 4:4:4 Predictive, level 2.2, 4:4:4, 8-bit
[libx264 @ 0x5e2ef8b01340] 264 - core 164 r3095 baee400 - H.264/MPEG-4 AVC codec - Copyleft 2003-2022 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=4 threads=4 lookahead_threads=1 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=10 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
Output #0, rtsp, to 'rtsp://localhost:8554/mystream':
  Metadata:
    encoder         : Lavf60.16.100
  Stream #0:0: Video: h264, yuv444p(tv, progressive), 640x480, q=2-31, 10 fps, 90k tbn
    Metadata:
      encoder         : Lavc60.31.102 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: N/A
[vost#0:0/libx264 @ 0x5e2ef8b01080] Error submitting a packet to the muxer: Broken pipe
[out#0/rtsp @ 0x5e2ef8afd780] Error muxing a packet
[out#0/rtsp @ 0x5e2ef8afd780] video:1kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
frame=    1 fps=0.1 q=-1.0 Lsize=N/A time=00:00:04.70 bitrate=N/A dup=0 drop=70 speed=0.389x
[libx264 @ 0x5e2ef8b01340] frame I:16    Avg QP: 6.00  size:   147
[libx264 @ 0x5e2ef8b01340] frame P:17    Avg QP: 9.94  size:   101
[libx264 @ 0x5e2ef8b01340] frame B:17    Avg QP: 9.94  size:    64
[libx264 @ 0x5e2ef8b01340] consecutive B-frames: 50.0%  0.0% 42.0%  8.0%
[libx264 @ 0x5e2ef8b01340] mb I  I16..4: 81.3% 18.7%  0.0%
[libx264 @ 0x5e2ef8b01340] mb P  I16..4: 52.9%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:47.1%
[libx264 @ 0x5e2ef8b01340] mb B  I16..4:  0.0%  5.9%  0.0%  B16..8:  0.1%  0.0%  0.0%  direct: 0.0%  skip:94.0%  L0:56.2% L1:43.8% BI: 0.0%
[libx264 @ 0x5e2ef8b01340] 8x8 transform intra:15.4% inter:100.0%
[libx264 @ 0x5e2ef8b01340] coded y,u,v intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x5e2ef8b01340] i16 v,h,dc,p: 97%  0%  3%  0%
[libx264 @ 0x5e2ef8b01340] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu:  0%  0% 100%  0%  0%  0%  0%  0%  0%
[libx264 @ 0x5e2ef8b01340] Weighted P-Frames: Y:52.9% UV:52.9%
[libx264 @ 0x5e2ef8b01340] ref P L0: 88.9%  0.0%  0.0% 11.1%
[libx264 @ 0x5e2ef8b01340] kb/s:8.27
Conversion failed!
Traceback (most recent call last):
  File "/home/avishka/projects/read-process-stream/minimal-ffmpeg-error.py", line 58, in <module>
    process.stdin.write(image_bytes)
BrokenPipeError: [Errno 32] Broken pipe