Advanced search

Media (1)

Word: - Tags -/pirate bay

Other articles (92)

  • Updating from version 0.1 to 0.2

    24 June 2013, by

    Explanation of the notable changes when moving from MediaSPIP version 0.1 to version 0.3. What is new?
    Regarding software dependencies: use of the latest versions of FFMpeg (>= v1.2.1); installation of the dependencies for Smush; installation of MediaInfo and FFprobe for retrieving metadata; ffmpeg2theora is no longer used; flvtool2 is no longer installed, in favour of flvtool++; ffmpeg-php, which is no longer maintained, is no longer installed (...)

  • Customising by adding your logo, banner or background image

    5 September 2013, by

    Some themes support three customisation elements: adding a logo; adding a banner; adding a background image.

  • Writing a news item

    21 June 2013, by

    Present the changes in your MediaSPIP, or news about your projects, on your MediaSPIP using the news section.
    In MediaSPIP's default theme, spipeo, news items are displayed at the bottom of the main page, below the editorials.
    You can customise the form used to create a news item.
    News item creation form: for a document of the "news item" type, the default fields are: publication date (customise the publication date) (...)

On other sites (10911)

  • 7 Fintech Marketing Strategies to Maximise Profits in 2024

    24 July 2024, by Erin

    Fintech investment skyrocketed in 2021, but funding tanked in the following two years. A 63% decline in fintech investment made 2023 the worst year for funding since 2017. Luckily, the correction quickly bottomed out, and the fintech industry is set to recover in 2024, but companies will have to work much harder to secure funds.

    F-Prime’s The 2024 State of Fintech Report called 2023 the year of “regulation on, risk off” amid market pressures and regulatory scrutiny. Funding is rising again, but investors want regulatory compliance and stronger growth performance from fintech ventures.

    Here are seven fintech marketing strategies to generate the growth investors seek in 2024.

    Top fintech marketing challenges in 2024

    After 2023 delivered the worst global investment run since 2017, fintech marketers need to readjust their goals to adapt to the current market challenges. The fintech honeymoon is over for Wall Street, with regulatory scrutiny, closures, and a distinct lack of profitability giving investors cold feet.

    Here are the biggest challenges fintech marketers face in 2024:

    • Market correction: With fewer rounds and longer times between them, securing funds is a major challenge for fintech businesses. F-Prime’s The 2024 State of Fintech Report warns of “a high probability of significant shutdowns in 2024 and 2025,” highlighting the importance of allocating resources and budgets effectively.
    • Contraction: Aside from VC funding decreasing by 64% in 2023, the payments category now attracts a large majority of fintech investment, meaning there’s a smaller share from a smaller pot to go around for everyone else.
    • Competition: The biggest names in finance have navigated heavy disruption from startups and, for the most part, emerged stronger than ever. Meanwhile, fintech is no longer Wall Street’s hottest commodity as investors turn their attention to AI.
    • Regulations: Regulatory scrutiny of fintech intensified in 2023 – particularly in the US – contributing to the “regulation on, risk off” summary of F-Prime’s report.
    • Investor scrutiny: With market and industry challenges intensifying, investors are putting their money behind “safer” ventures that demonstrate real, sustainable profitability, not short-term growth.
    • Customer loyalty: Even in traditional banking and finance, switching is surging as customers seek providers who better meet their needs. To achieve the sustainable growth investors are looking for, fintech startups need to know their ideal customer profile (ICP), tailor their products/services and fintech marketing campaigns to them, and retain them throughout the customer lifecycle.
    A tree map comparing fintech investment from 2021 to 2023
    (Source)

    The good news for fintech marketers is that the market correction is leveling out in 2024. In The 2024 State of Fintech Report, F-Prime says that “heading into 2024, we see the fintech market amid a rebound,” while McKinsey expects fintech revenue to grow “almost three times faster than those in the traditional banking sector between 2023 and 2028.”

    Winning back investor confidence won’t be easy, though. F-Prime acknowledges that investors are prioritising high-performance fintech ventures, particularly those with high gross margins. Fintech marketers need to abandon the growth-at-all-costs mindset and switch to a data-driven optimisation, growth and revenue system.

    7 fintech marketing strategies

    Given the current state of the fintech industry and relatively low levels of investor confidence, fintech marketers’ priority is building a new culture of sustainable profit. This starts with rethinking priorities and switching up the marketing goals to reflect longer-term ambitions.

    So, here are the fintech marketing strategies that matter most in 2024.

    1. Optimise for profitability over growth at all costs

    To progress from the growth-at-all-cost mindset, fintech marketers need to optimise for different KPIs. Instead of flexing metrics like customer growth rate, fintech companies need to take a more balanced approach to measuring sustainable profitability.

    This means holding on to existing customers – and maximising their value – while acquiring new customers. It also means that, instead of trying to make everyone a target customer, you concentrate on targeting the most valuable prospects, even if it results in a smaller overall user base.

    Optimising for profitability starts with putting vanity metrics in their place and pinpointing the KPIs that represent valuable business growth (a short calculation sketch follows the list):

    • Gross profit margin
    • Revenue growth rate
    • Cash flow
    • Monthly active user growth (qualify “active” as completing a transaction)
    • Customer acquisition cost
    • Customer retention rate
    • Customer lifetime value
    • Avg. revenue per user
    • Avg. transactions per month
    • Avg. transaction value
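
    To make the relationships between a few of these KPIs concrete, here is a minimal Python sketch using made-up monthly figures; every number and field name below is hypothetical and purely illustrative, not taken from the article.

    # Hypothetical monthly figures for a fintech product (illustrative only).
    revenue = 1_200_000.0          # total revenue for the month
    cogs = 480_000.0               # cost of goods/services sold
    new_customers = 4_000          # customers acquired this month
    marketing_spend = 600_000.0    # total acquisition spend
    start_customers = 50_000       # customers at start of month
    end_customers = 51_500         # customers at end of month
    avg_lifetime_months = 24       # assumed average customer lifetime

    gross_margin = (revenue - cogs) / revenue                        # gross profit margin
    cac = marketing_spend / new_customers                            # customer acquisition cost
    retention = (end_customers - new_customers) / start_customers    # customer retention rate
    arpu = revenue / end_customers                                   # avg. revenue per user
    clv = arpu * gross_margin * avg_lifetime_months                  # simple customer lifetime value

    print(f"Gross margin: {gross_margin:.1%}")
    print(f"CAC: ${cac:,.2f}")
    print(f"Retention rate: {retention:.1%}")
    print(f"CLV: ${clv:,.2f} (vs CAC ${cac:,.2f})")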

    With a more focused acquisition strategy, you can feed these insights into every company level. For example, you can prioritise customer engagement, revenue, retention, and customer service in product development and customer experience (CX).

    To ensure all marketing efforts are pulling towards these KPIs, you need an attribution system that accurately measures the contribution of each channel.

    Marketing attribution (aka multi-touch attribution) should be used to measure every touchpoint in the customer journey and accurately credit them for driving revenue. This helps you allocate the correct budget to the channels and campaigns, adding real value to the business (e.g., social media marketing vs content marketing).
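
    As a rough sketch of the idea (a simple linear multi-touch model, not Matomo's actual attribution logic; the channels and journeys below are invented), each touchpoint in a converting journey receives an equal share of the revenue:

    from collections import defaultdict

    # Hypothetical converting journeys: (ordered touchpoints, revenue generated).
    journeys = [
        (["social", "content", "email"], 300.0),
        (["search", "content"], 150.0),
        (["social", "email"], 200.0),
    ]

    credit = defaultdict(float)
    for touchpoints, revenue in journeys:
        share = revenue / len(touchpoints)  # linear model: equal credit per touchpoint
        for channel in touchpoints:
            credit[channel] += share

    for channel, value in sorted(credit.items(), key=lambda kv: -kv[1]):
        print(f"{channel}: {value:.2f} attributed revenue")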

    Example: Mastercard helps a digital bank acquire 10 million high-value customers

    For example, Mastercard helped a digital bank in Latin America achieve sustainable growth beyond customer acquisition. The fintech company wanted to increase revenue through targeted acquisition and profitable engagement metrics.

    Strategies included:

    • A more targeted acquisition strategy for high-value customers
    • Increasing avg. spend per customer
    • Reducing acquisition cost
    • Customer retention

    As a result, Mastercard’s advisors helped this fintech company acquire 10 million new customers in two years. More importantly, they increased customer spending by 28% while reducing acquisition costs by 13%, creating a more sustainable and profitable growth model.

    2. Use web and app analytics to remotivate users before they disengage

    Engagement is the key to customer retention and lifetime value. To prevent valuable customers from disengaging, you need to intervene when they show early signs of losing interest, but they’re still receptive to your incentivisation tactics (promotions, rewards, milestones, etc.).

    By integrating web and app analytics, you can identify churn patterns and pinpoint the sequences of actions that lead to disengagement. For example, you might determine that customers who only log in once a month, engage with one dashboard, or drop below a certain transaction rate are at high risk of churn.

    Using a tool like Matomo for web and app analytics, you can detect these early signs of disengagement. Once you identify your churn risks, you can create triggers that automatically fire re-engagement campaigns. You can also use CRM and session data to personalise campaigns so they directly address the cause of disengagement, e.g., valuable content or incentives to increase transaction rates.
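
    As an illustrative sketch only (the thresholds, field names and at_risk helper below are invented, not a built-in Matomo feature), the early-warning rules described above could be expressed like this:

    from dataclasses import dataclass

    @dataclass
    class MonthlyActivity:
        user_id: str
        logins: int
        dashboards_viewed: int
        transactions: int

    # Hypothetical thresholds for "early signs of disengagement".
    MIN_LOGINS = 2
    MIN_DASHBOARDS = 2
    MIN_TRANSACTIONS = 3

    def at_risk(activity: MonthlyActivity) -> bool:
        """Return True when a user matches any of the example churn signals."""
        return (
            activity.logins <= MIN_LOGINS
            or activity.dashboards_viewed <= MIN_DASHBOARDS
            or activity.transactions < MIN_TRANSACTIONS
        )

    users = [
        MonthlyActivity("u1", logins=1, dashboards_viewed=1, transactions=0),
        MonthlyActivity("u2", logins=12, dashboards_viewed=5, transactions=9),
    ]

    for u in users:
        if at_risk(u):
            print(f"{u.user_id}: trigger re-engagement campaign")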

    Example: Dynamic Yield fintech re-engagement case study

    In this Dynamic Yield case study, one leading fintech company uses customer spending patterns to identify those most likely to disengage. The company set up automated campaigns with personalised in-app messaging, offering time-bound incentives to increase transaction rates.

    With fully automated re-engagement campaigns, this fintech company increased customer retention through valuable engagement and revenue-driving actions.

    3. Identify the path your most valuable customers take

    Why optimise web experiences for everyone when you can tailor the online journey for your most valuable customers? Use customer segmentation to identify the shared interests and habits of your most valuable customers. You can learn a lot about customers from the pages they visit and the content they engage with before taking action.

    Use these insights to optimise funnels that motivate prospects displaying the same customer behaviours as your most valuable customers.
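
    As a loose sketch of that kind of analysis (the page names, segments and data structure are invented for illustration, not a specific Matomo report), you might compare which pre-conversion pages each segment visits most often:

    from collections import Counter

    # Hypothetical pre-conversion page paths, keyed by customer segment.
    paths_by_segment = {
        "high_value": [["/pricing", "/security", "/signup"],
                       ["/blog/fees", "/pricing", "/signup"]],
        "other":      [["/home", "/signup"],
                       ["/blog/news", "/home", "/signup"]],
    }

    for segment, paths in paths_by_segment.items():
        pages = Counter(page for path in paths for page in path if page != "/signup")
        print(segment, pages.most_common(3))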

    Get 20-40% more data with Matomo

    One of the biggest issues with Google Analytics and many similar tools is that they produce inaccurate data due to data sampling. Once you collect a certain amount of data, Google reports estimates instead of giving you complete, accurate insights.

    This means you could be basing important business decisions on inaccurate data. Furthermore, when investors are nervous about the uncertainty surrounding fintech, the last thing they want is inaccurate data.

    Matomo is the reliable, accurate alternative to Google Analytics that uses no data sampling whatsoever. You get 100% access to your web analytics data, so you can base every decision on reliable insights. With Matomo, you can access between 20% and 40% more data compared to Google Analytics.

    Matomo no data sampling

    With Matomo, you can confidently unlock the full picture of your marketing efforts and give potential investors insights they can trust.

    4. Reduce onboarding dropouts with marketing automation

    Onboarding dropouts kill your chance of getting any return on your customer acquisition cost. You also miss out on developing a long-term relationship with users who fail to complete the onboarding process – a hit on immediate ROI and, potentially, long-term profits.

    The onboarding process also defines the first impression for customers and sets a precedent for their ongoing experience.

    An engaging onboarding experience converts more potential customers into active users and sets them up for repeat engagement and valuable actions.

    Example: Maxio reduces onboarding time by 30% with GUIDEcx

    Onboarding optimisation specialist GUIDEcx helped Maxio cut six weeks off its onboarding times – a 30% reduction.

    With a shorter onboarding schedule, more customers are committing to close the deal during kick-off calls. Meanwhile, by increasing automated tasks by 20%, the company has unlocked a 40% increase in capacity, allowing it to handle more customers at any given time and multiplying its capacity to generate revenue.

    5. Increase the value in TTFV with personalisation

    Time to first value (TTFV) is a key metric for onboarding optimisation, but some actions are more valuable than others. By personalising the experience for new users, you can increase the value of their first action, increasing motivation to continue using your fintech product/service.

    The onboarding process is an opportunity to learn more about new customers and deliver the most rewarding user experience for their particular needs.
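
    As a small sketch of how TTFV might be measured (the timestamps and event structure below are invented), take the median time between signup and each new user's first valuable action:

    from datetime import datetime
    from statistics import median

    # Hypothetical (signup_time, first_valuable_action_time) pairs per new user.
    events = [
        (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 10, 7)),
        (datetime(2024, 5, 1, 11, 0), datetime(2024, 5, 2, 9, 30)),
        (datetime(2024, 5, 2, 14, 0), datetime(2024, 5, 2, 14, 20)),
    ]

    ttfv_minutes = [(first_value - signup).total_seconds() / 60 for signup, first_value in events]
    print(f"Median TTFV: {median(ttfv_minutes):.1f} minutes")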

    Example: Betterment helps users put their money to work right away

    Betterment has implemented a quick, personalised onboarding system instead of the typical email signup process. The app wants to help new customers put their money to work right away, optimising for the first transaction during onboarding itself.

    It personalises the experience by prompting new users to choose their goals, set up the right account for them, and select the best portfolio to achieve their goals. They can complete their first investment within a matter of minutes and professional financial advice is only ever a click away.

    Optimise account signups with Matomo

    If you want to create and optimise a signup process like Betterment, you need an analytics system with a complete conversion rate optimisation (CRO) toolkit. 

    A screenshot of conversion reporting in Matomo

    Matomo includes all the CRO features you need to optimise user experience and increase signups. With heatmaps, session recordings, form analytics, and A/B testing, you can make data-driven decisions with confidence.

    6. Use gamification to drive product engagement

    Gamification can create a more engaging experience and increase motivation for customers to continue using a product. The key is to reward valuable actions, engagement time, goal completions, and the small objectives that build up to bigger achievements.

    Gamification is most effective when used to help individuals achieve goals they’ve set for themselves, rather than the goals of others (e.g., an employer). This helps explain why it’s so valuable to the fintech experience and how to implement effective gamification into products and services.

    Example: Credit Karma gamifies personal finance

    Credit Karma helps users improve their credit and build their net worth, subtly gamifying the entire experience.

    Users can set their financial goals and link all of their accounts to keep track of their assets in one place. The app helps users “see your wealth grow”, with assets, debts, and investments all contributing to their net worth as one easy-to-track figure.

    7. Personalise loyalty programs for retention and CLV

    Loyalty programs tap into similar psychology as gamification to motivate and reward engagement. Typically, the key difference is that – rather than customers earning rewards for reaching their own goals – you directly reward them for their long-term loyalty.

    That being said, you can implement elements of gamification and personalisation into loyalty programs, too. 

    Example: Bank of America’s Preferred Rewards

    Bank of America’s Preferred Rewards program implements a tiered rewards system that rewards customers for their combined spending, saving, and borrowing activity.

    The program incentivises all customer activity with the bank and amplifies the rewards for its most active customers. Customers can also set personal finance goals (e.g., saving for retirement) to see which rewards benefit them the most.
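
    The tier mechanics could be sketched roughly like this (the thresholds and tier names are invented for illustration, not Bank of America's published tiers):

    # Hypothetical tier thresholds based on combined balances across accounts.
    TIERS = [(100_000, "Tier 3"), (50_000, "Tier 2"), (20_000, "Tier 1")]

    def reward_tier(checking: float, savings: float, investments: float) -> str:
        combined = checking + savings + investments
        for threshold, name in TIERS:
            if combined >= threshold:
                return name
        return "Base"

    print(reward_tier(checking=5_000, savings=30_000, investments=40_000))  # -> "Tier 2"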

    Conclusion

    Fintech marketing needs to catch up with the new priorities of investors in 2024. The pre-pandemic buzz is over, and investors remain cautious as regulatory scrutiny intensifies, security breaches mount up, and the market limps back into recovery.

    To win investor and consumer trust, fintech companies need to drop the growth-at-all-costs mindset and switch to a marketing philosophy of long-term profitability. This is what investors want in an unstable market, and it’s certainly what customers want from a company that handles their money.

    Unlock the full picture of your marketing efforts with Matomo’s robust features and accurate reporting. Trusted by over 1 million websites, Matomo is chosen for its compliance, accuracy, and powerful features that drive actionable insights and improve decision-making.

     Start your free 21-day trial now. No credit card required.

  • ffmpeg AudioSegment error when getting audio chunks in a Socket.IO server in Python

    26 January 2024, by a_crszkvc30Last_NameCol

    I want to send each audio chunk every minute. This is the test code; I want to save the full audio file and the audio chunk files, then combine the two audio files. The stop button worked correctly, but the timed (chunked) sending does not work on the Python server. Here is the Python server code with Socket.IO:

    


    import os
    from io import BytesIO

    from pydub import AudioSegment

    def handle_voice(sid, data):  # data arrives as a blob (registered as the Socket.IO 'voice' event handler)
        # Load the audio data in memory using BytesIO
        audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")
        directory = "dddd"
        # directory = str(names_sid.get(sid))
        if not os.path.exists(directory):
            os.makedirs(directory)

        # Save as an audio file
        file_path = os.path.join(directory, f'{sid}.wav')
        audio_segment.export(file_path, format='wav')
        print('Audio file saved')
 


    


    And here is the client:

    


    <script src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.5.2/socket.io.js"></script>

    <script>
      var socket = io('http://127.0.0.1:5000');
      const record = document.getElementById("record")
      const stop = document.getElementById("stop")
      const soundClips = document.getElementById("sound-clips")
      const chkHearMic = document.getElementById("chk-hear-mic")

      const audioCtx = new (window.AudioContext || window.webkitAudioContext)() // define the audio context

      const analyser = audioCtx.createAnalyser()
      // const distortion = audioCtx.createWaveShaper()
      // const gainNode = audioCtx.createGain()
      // const biquadFilter = audioCtx.createBiquadFilter()

      function makeSound(stream) {
        const source = audioCtx.createMediaStreamSource(stream)
        socket.connect()
        source.connect(analyser)
        // analyser.connect(distortion)
        // distortion.connect(biquadFilter)
        // biquadFilter.connect(gainNode)
        // gainNode.connect(audioCtx.destination) // connecting the different audio graph nodes together
        analyser.connect(audioCtx.destination)
      }

      if (navigator.mediaDevices) {
        console.log('getUserMedia supported.')

        const constraints = { audio: true }
        let chunks = []

        navigator.mediaDevices.getUserMedia(constraints)
          .then(stream => {
            const mediaRecorder = new MediaRecorder(stream)

            chkHearMic.onchange = e => {
              if (e.target.checked == true) {
                audioCtx.resume()
                makeSound(stream)
              } else {
                audioCtx.suspend()
              }
            }

            record.onclick = () => {
              mediaRecorder.start(1000)
              console.log(mediaRecorder.state)
              console.log("recorder started")
              record.style.background = "red"
              record.style.color = "black"
            }

            stop.onclick = () => {
              mediaRecorder.stop()
              console.log(mediaRecorder.state)
              console.log("recorder stopped")
              record.style.background = ""
              record.style.color = ""
            }

            mediaRecorder.onstop = e => {
              console.log("data available after MediaRecorder.stop() called.")
              const bb = new Blob(chunks, { 'type': 'audio/wav' })
              socket.emit('voice', bb)
              const clipName = prompt("Enter a title for the audio clip.", new Date())

              const clipContainer = document.createElement('article')
              const clipLabel = document.createElement('p')
              const audio = document.createElement('audio')
              const deleteButton = document.createElement('button')

              clipContainer.classList.add('clip')
              audio.setAttribute('controls', '')
              deleteButton.innerHTML = "Delete"
              clipLabel.innerHTML = clipName

              clipContainer.appendChild(audio)
              clipContainer.appendChild(clipLabel)
              clipContainer.appendChild(deleteButton)
              soundClips.appendChild(clipContainer)

              audio.controls = true
              const blob = new Blob(chunks, { 'type': 'audio/ogg codecs=opus' })

              chunks = []
              const audioURL = URL.createObjectURL(blob)
              audio.src = audioURL
              console.log("recorder stopped")

              deleteButton.onclick = e => {
                evtTgt = e.target
                evtTgt.parentNode.parentNode.removeChild(evtTgt.parentNode)
              }
            }

            mediaRecorder.ondataavailable = function(e) {
              chunks.push(e.data)
              if (chunks.length >= 5) {
                const bloddb = new Blob(chunks, { 'type': 'audio/wav' })
                socket.emit('voice', bloddb)

                chunks = []
              }
              mediaRecorder.sendData = function(buffer) {
                const bloddb = new Blob(buffer, { 'type': 'audio/wav' })
                socket.emit('voice', bloddb)
              }
            };
          })
          .catch(err => {
            console.log('The following error occurred: ' + err)
          })
      }
    </script>


    Task exception was never retrieved
    future: <Task finished coro=<InstrumentedAsyncServer._handle_event_internal()> exception=CouldntDecodeError(...)>
    Traceback (most recent call last):
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_admin.py", line 276, in _handle_event_internal
        ret = await self.sio.__handle_event_internal(server, sid, eio_sid,
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 597, in _handle_event_internal
        r = await server._trigger_event(data[0], namespace, sid, *data[1:])
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\socketio\async_server.py", line 635, in _trigger_event
        ret = handler(*args)
      File "f:\fastapi-socketio-wb38\Python-Javascript-Websocket-Video-Streaming--main\poom2.py", line 153, in handle_voice
        audio_segment = AudioSegment.from_file(BytesIO(data), format="webm")
      File "f:\fastapi-socketio-wb38\.vent\Lib\site-packages\pydub\audio_segment.py", line 773, in from_file
        raise CouldntDecodeError(
    pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 3199971767

    Output from ffmpeg/avlib:

    ffmpeg version 6.1.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers
      built with gcc 12.2.0 (Rev10, Built by MSYS2 project)
      configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
      libavutil      58. 29.100 / 58. 29.100
      libavcodec     60. 31.102 / 60. 31.102
      libavformat    60. 16.100 / 60. 16.100
      libavdevice    60.  3.100 / 60.  3.100
      libavfilter     9. 12.100 /  9. 12.100
      libswscale      7.  5.100 /  7.  5.100
      libswresample   4. 12.100 /  4. 12.100
      libpostproc    57.  3.100 / 57.  3.100
    [cache @ 000001d9828efe40] Inner protocol failed to seekback end : -40
    [matroska,webm @ 000001d9828efa00] EBML header parsing failed
    [cache @ 000001d9828efe40] Statistics, cache hits:0 cache misses:3
    [in#0 @ 000001d9828da3c0] Error opening input: Invalid data found when processing input
    Error opening input file cache:pipe:0.
    Error opening input files: Invalid data found when processing input


    I'm using ffmpeg-6.1.1-full_build. I don't know why this error occurs: the stop button sends its event correctly, but the chunk data does not work correctly on the Python server. My English is not great, sorry.
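
    The "EBML header parsing failed" message suggests that only the first blob emitted by MediaRecorder carries the WebM header, so later chunk batches cannot be decoded as standalone files. As a hedged sketch of one possible workaround (the helper name and dictionaries below are invented, and this is only one way it might be handled, not a confirmed fix), the server could cache the first blob per client, prepend it before decoding, and trim off the duplicated leading audio:

    from io import BytesIO

    from pydub import AudioSegment

    first_batch = {}     # sid -> raw bytes of the first emitted blob (contains the WebM header)
    first_batch_ms = {}  # sid -> duration of that first blob in milliseconds

    def decode_webm_chunk(sid, data):
        if sid not in first_batch:
            first_batch[sid] = data
            segment = AudioSegment.from_file(BytesIO(data), format="webm")
            first_batch_ms[sid] = len(segment)  # len() of an AudioSegment is its duration in ms
            return segment
        # Later batches: prepend the header-bearing first blob so ffmpeg can parse the data,
        # then cut off the duplicated leading audio again.
        segment = AudioSegment.from_file(BytesIO(first_batch[sid] + data), format="webm")
        return segment[first_batch_ms[sid]:]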


  • xdotool to tab to a button on a web page and use the mouse to disable a drop down menu option

    25 June 2023, by Mash

    I have a Bash script that opens an Amazon Chime meeting URL in Firefox, uses xdotool to enter a meeting participant name and perform tab and mouse-click actions, and then uses ffmpeg to stream the video and audio output of the Amazon Chime meeting to an RTMP destination.


    When this is streamed, the Amazon Chime web app shows a "More" drop-down menu. Within that menu there is an option to disable the self view. I want to add xdotool commands that disable this self-view option from the "More" drop-down menu on the Amazon Chime web app page.


    The Amazon Chime meeting URL is: https://app.chime.aws/meetings/


    Here is the Bash script:


#!/bin/bash
BROWSER_URL=${MEETING_URL}
SCREEN_WIDTH=1920
SCREEN_HEIGHT=1080
SCREEN_RESOLUTION=${SCREEN_WIDTH}x${SCREEN_HEIGHT}
CAPTURE_SCREEN_RESOLUTION=1920x1080
COLOR_DEPTH=24
X_SERVER_NUM=2
VIDEO_BITRATE=6000
VIDEO_FRAMERATE=30
VIDEO_GOP=$((VIDEO_FRAMERATE * 2))
AUDIO_BITRATE=160k
AUDIO_SAMPLERATE=44100
AUDIO_CHANNELS=2

# Start PulseAudio server so Firefox will have somewhere to which to send audio
pulseaudio -D --exit-idle-time=-1
pacmd load-module module-virtual-sink sink_name=v1  # Load a virtual sink as `v1`
pacmd set-default-sink v1  # Set the `v1` as the default sink device
pacmd set-default-source v1.monitor  # Set the monitor of the v1 sink to be the default source

# Start X11 virtual framebuffer so Firefox will have somewhere to draw
Xvfb :${X_SERVER_NUM} -ac -screen 0 ${SCREEN_RESOLUTION}x${COLOR_DEPTH} > /dev/null 2>&1 &
export DISPLAY=:${X_SERVER_NUM}.0
sleep 0.5  # Ensure this has started before moving on

# Create a new Firefox profile for capturing preferences for this
firefox --no-remote --new-instance --createprofile "foo4 /tmp/foo4"

# Install the OpenH264 plugin for Firefox
mkdir -p /tmp/foo4/gmp-gmpopenh264/1.8.1.1/
pushd /tmp/foo4/gmp-gmpopenh264/1.8.1.1 >& /dev/null
curl -s -O http://ciscobinary.openh264.org/openh264-linux64-2e1774ab6dc6c43debb0b5b628bdf122a391d521.zip
unzip openh264-linux64-2e1774ab6dc6c43debb0b5b628bdf122a391d521.zip
rm -f openh264-linux64-2e1774ab6dc6c43debb0b5b628bdf122a391d521.zip
popd >& /dev/null

# Set the Firefox preferences to enable automatic media playing with no user
# interaction and the use of the OpenH264 plugin.
cat <<EOF >> /tmp/foo4/prefs.js
user_pref("media.autoplay.default", 0);
user_pref("media.autoplay.enabled.user-gestures-needed", false);
user_pref("media.navigator.permission.disabled", true);
user_pref("media.gmp-gmpopenh264.abi", "x86_64-gcc3");
user_pref("media.gmp-gmpopenh264.lastUpdate", 1571534329);
user_pref("media.gmp-gmpopenh264.version", "1.8.1.1");
user_pref("doh-rollout.doorhanger-shown", true);
EOF

# Start Firefox browser and point it at the URL we want to capture
#
# NB: The `--width` and `--height` arguments have to be very early in the
# argument list or else only a white screen will result in the capture for some
# reason.
firefox \
  -P foo4 \
  --width ${SCREEN_WIDTH} \
  --height ${SCREEN_HEIGHT} \
  --new-instance \
  --first-startup \
  --foreground \
  --kiosk \
  --ssb \
  "${BROWSER_URL}" \
  &
sleep 10  # Ensure this has started before moving on, waiting for loading the Chime web app
xdotool key Return  # Select yes for the pop-up window of "Would you like to open this link with Chime app?"
sleep 3
xdotool key Escape  # Close the pop-up window
sleep 3
xdotool type Livestream  # Type "Livestream" on the name input field
sleep 3
xdotool key Tab  # Move to "join the meeting" button
sleep 3
xdotool key Return  # Click "join the meeting" button
sleep 3
xdotool key Return  # Close the pop-up window once again
sleep 3
xdotool key Escape  # Close the pop-up window once again
sleep 3
xdotool key Return  # Click "Use system audio" setting
sleep 3
xdotool key Escape  # Close warning message
sleep 3
xdotool mousemove 1 1 click 1  # Move mouse out of the way so it doesn't trigger the "pause" overlay on the video tile

# Start ffmpeg to transcode the capture from the X11 framebuffer and the
# PulseAudio virtual sound device we created earlier and send that to the RTMP
# endpoint in H.264/AAC format using a FLV container format.
#
# NB: These arguments have a very specific order. Seemingly inocuous changes in
# argument order can have pretty drastic effects, so be careful when
# adding/removing/reordering arguments here.
ffmpeg \
  -hide_banner -loglevel error \
  -nostdin \
  -s ${CAPTURE_SCREEN_RESOLUTION} \
  -r ${VIDEO_FRAMERATE} \
  -draw_mouse 0 \
  -f x11grab \
    -i ${DISPLAY} \
  -f pulse \
    -ac 2 \
    -i default \
    -vf "crop=1600:980:0:1080" \
  -c:v libx264 \
    -pix_fmt yuv420p \
    -profile:v main \
    -preset slow \
    -x264opts "nal-hrd=cbr:no-scenecut" \
    -minrate ${VIDEO_BITRATE} \
    -maxrate ${VIDEO_BITRATE} \
    -g ${VIDEO_GOP} \
  -filter_complex "aresample=async=1000:min_hard_comp=0.100000:first_pts=1" \
  -async 1 \
  -c:a aac \
    -b:a ${AUDIO_BITRATE} \
    -ac ${AUDIO_CHANNELS} \
    -ar ${AUDIO_SAMPLERATE} \
  -f flv ${RTMP_URL}


    What I have tried so far in the Bash script:
