
Other articles (109)
- Sites built with MediaSPIP
2 May 2011
This page presents some of the sites running MediaSPIP. You can of course add your own via the form at the bottom of the page.
- HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used instead.
The HTML5 player was created specifically for MediaSPIP: its appearance is fully customizable to match the chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
- From upload to the final video [standalone version]
31 January 2010
The path of an audio or video document through SPIPMotion is divided into three distinct stages.
Upload and retrieval of information about the source video
First, you need to create a SPIP article and attach the "source" video document to it.
When this document is attached to the article, two actions are executed in addition to the normal behaviour: retrieval of the technical information of the file's audio and video streams; generation of a thumbnail: extraction of a (...)
On other sites (15986)
- avpacket: Fix error checking in packet_alloc
11 September 2013, by Martin Storsjö
Previously the wrong buffer pointer was checked: buf was checked instead of *buf. But checking the return value instead is even better.
Reported-by: Mateusz "j00ru" Jurczyk and Gynvael Coldwind
CC: libav-stable@libav.org
Signed-off-by: Martin Storsjö <martin@martin.st>
- ffmpeg concatenation after using drawtext filter
12 August 2016, by Sven Hoskens
I'm fairly new to ffmpeg, but after a few days of searching on this issue I've completely hit a brick wall. Any help would be appreciated.
My use case: our client wants to upload videos for multiple regions. Each video will have the same format: 1920x1080, mp4. For each region, they want to add a different image at the end of the video for a few seconds. This image contains their logo, some additional info, and a variable code. They will enter this code alongside the uploaded video. The image itself stays the same, so it is already present on the server.
So basically, I have an input video, a video of an image, and a small code. I need to add this code to the video of the image (in a predefined position), and then append the resulting video to the end of the input video. Once that is complete, I just need to output the video in 1920x1080 and in 1024x576. I have tried several things, but the concatenation step always fails with the manipulated videos.
Attempt 1
In my first attempt, I used ffmpeg to create a video from the image and add the text in the designated area.
ffmpeg -y -f lavfi -i image.png -r 30 -t 10 -pix_fmt yuv420p -map 0:v -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" imageVideo.mp4
This command creates a .mp4 video of the correct size, with a duration of 10 seconds, and adds the text ’GLNS/TEST/1234b’ in the correct location.
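For comparison, a more conventional way to turn a still image into a clip, and to give it a silent audio track so that its streams line up with the main video, would be something along these lines; this is a sketch only, and the frame rate, duration and 44.1 kHz stereo audio are assumptions that should be matched to the source video:
# -loop 1 turns the still image into a 10 s video stream; anullsrc provides silent audio
# (all values below are illustrative, match them to my_input_file.mp4)
ffmpeg -loop 1 -framerate 30 -t 10 -i image.png \
  -f lavfi -t 10 -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -vf "drawtext=fontfile=HelveticaNeue.dfont:text='GLNS/TEST/1234b':fontcolor=black:fontsize=20:box=1:boxcolor=white:boxborderw=7:x=179:y=805" \
  -c:v libx264 -pix_fmt yuv420p -c:a aac -shortest imageVideo.mp4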
Next, I use the following command to concatenate the two videos. Both have the same resolution and codec.
ffmpeg -f concat -safe 0 -i config.txt -vf scale=1920:1080 outputHD.mp4 -vf scale=1024:576 outputSD.mp4
config.txt contains the following:
file my_input_file.mp4
file ImageVideo.mp4
This concatenation works with regular videos. However, when I use it with ImageVideo.mp4 (the one created by the first command), I get this error log:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f86dc924600] Auto-inserting h264_mp4toannexb bitstream filtereed=0.509x
[aac @ 0x7f86dc019e00] Number of bands (31) exceeds limit (5).
Error while decoding stream #0:1: Invalid data found when processing input
[aac @ 0x7f86dc019e00] Number of bands (27) exceeds limit (8).
Error while decoding stream #0:1: Invalid data found when processing input
[h264 @ 0x7f86dd857200] Error splitting the input into NAL units.
[h264 @ 0x7f86dd829400] Invalid NAL unit size.
[h264 @ 0x7f86dd829400] Error splitting the input into NAL units.
[aac @ 0x7f86dc019e00] Number of bands (10) exceeds limit (1).
Error while decoding stream #0:1: Invalid data found when processing input
[h264 @ 0x7f86dd816800] Invalid NAL unit size.
[h264 @ 0x7f86dd816800] Error splitting the input into NAL units.
[aac @ 0x7f86dc019e00] Number of bands (24) exceeds limit (1).
Error while decoding stream #0:1: Invalid data found when processing input
# this goes on for a few hundred lines
The resulting output is identical to the input video, but does not contain the desired image video at the end.
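A plausible reading of this log (an assumption on my part, the output does not state it outright) is that imageVideo.mp4, being generated from a still image, has no audio stream and different H.264 parameters than the source, while the concat demuxer with stream copy expects every listed file to contain identical streams. One workaround that sidesteps the demuxer is the concat filter, which re-encodes and lets you feed in a silent audio track for the image segment; the duration, sample rate and channel layout below are assumptions to be matched to the real files:
# input 0: main video, input 1: image clip (video only), input 2: generated silent audio
ffmpeg -i my_input_file.mp4 -i imageVideo.mp4 \
  -f lavfi -t 10 -i anullsrc=channel_layout=stereo:sample_rate=44100 \
  -filter_complex "[0:v][0:a][1:v][2:a]concat=n=2:v=1:a=1[v][a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac outputHD.mp4
If imageVideo.mp4 already carries a silent track (as in the sketch further up), "[1:a]" can be used in place of the anullsrc input.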
Attempt 2
Since the above attempt didn't work, I tried concatenating the input video with a video of the image that I had our designer create in Adobe After Effects. This video was also saved as an .mp4 with the H264 codec. If I concatenate the input video and this one, I get a correct result. However, as soon as I add the code in the designated area with this command:
ffmpeg -i new_image_video.mp4 -vf drawtext="fontfile=HelveticaNeue.dfont: text='GLNS/TEST/1234b': fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=7: x=179: y=805" -c:v libx264 imageVideo.mp4
I get this error:
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7ff94c800000] Auto-inserting h264_mp4toannexb bitstream filter97x
[h264 @ 0x7ff94b053800] top block unavailable for requested intra mode -1
[h264 @ 0x7ff94b053800] error while decoding MB 0 0, bytestream 49526
[h264 @ 0x7ff94b053e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
[h264 @ 0x7ff94b053e00] chroma_log2_weight_denom 28 is out of range
[h264 @ 0x7ff94b053e00] illegal long ref in memory management control operation 2
[h264 @ 0x7ff94b053e00] cabac_init_idc 32 overflow
[h264 @ 0x7ff94b053e00] decode_slice_header error
[h264 @ 0x7ff94b053e00] no frame!
[h264 @ 0x7ff94b053800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
[h264 @ 0x7ff94b072a00] reference overflow 22 > 15 or 0 > 15
[h264 @ 0x7ff94b072a00] decode_slice_header error
[h264 @ 0x7ff94b072a00] no frame!
[h264 @ 0x7ff94b01a400] illegal modification_of_pic_nums_idc 20
[h264 @ 0x7ff94b01a400] decode_slice_header error
[h264 @ 0x7ff94b01a400] no frame!
[h264 @ 0x7ff94b01aa00] illegal modification_of_pic_nums_idc 20
[h264 @ 0x7ff94b01aa00] decode_slice_header error
[h264 @ 0x7ff94b01aa00] no frame!
Error while decoding stream #0:0: Invalid data found when processing input
[h264 @ 0x7ff94b053800] deblocking_filter_idc 8 out of range
[h264 @ 0x7ff94b053800] decode_slice_header error
[h264 @ 0x7ff94b053800] no frame!
Error while decoding stream #0:0: Invalid data found when processing input
[h264 @ 0x7ff94b053e00] illegal memory management control operation 8
[h264 @ 0x7ff94b053e00] co located POCs unavailable
[h264 @ 0x7ff94b053e00] error while decoding MB 2 0, bytestream -35
[h264 @ 0x7ff94b053e00] concealing 8160 DC, 8160 AC, 8160 MV errors in B frame
[h264 @ 0x7ff94b072a00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
# this goes on for a while...
[h264 @ 0x7ff94b01a400] concealing 4962 DC, 4962 AC, 4962 MV errors in B frame
Error while decoding stream #0:0: Invalid data found when processing input
frame= 2553 fps= 17 q=-1.0 Lsize= 26995kB time=00:01:42.16 bitrate=2164.6kbits/s dup=0 drop=60 speed=0.697x
video:25258kB audio:1661kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.285236%
[libx264 @ 0x7ff94b810400] frame I:35 Avg QP:17.45 size: 55070
[libx264 @ 0x7ff94b810400] frame P:711 Avg QP:19.73 size: 18712
[libx264 @ 0x7ff94b810400] frame B:1807 Avg QP:21.53 size: 5884
[libx264 @ 0x7ff94b810400] consecutive B-frames: 3.4% 5.0% 4.9% 86.6%
[libx264 @ 0x7ff94b810400] mb I I16..4: 38.2% 49.3% 12.5%
[libx264 @ 0x7ff94b810400] mb P I16..4: 12.4% 14.0% 1.0% P16..4: 29.6% 4.8% 1.9% 0.0% 0.0% skip:36.2%
[libx264 @ 0x7ff94b810400] mb B I16..4: 1.5% 1.2% 0.1% B16..8: 27.3% 1.6% 0.1% direct: 1.8% skip:66.4% L0:45.8% L1:51.4% BI: 2.8%
[libx264 @ 0x7ff94b810400] 8x8 transform intra:49.5% inter:85.4%
[libx264 @ 0x7ff94b810400] coded y,uvDC,uvAC intra: 21.2% 22.3% 2.5% inter: 4.6% 7.0% 0.0%
[libx264 @ 0x7ff94b810400] i16 v,h,dc,p: 23% 26% 10% 41%
[libx264 @ 0x7ff94b810400] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 19% 35% 3% 3% 3% 3% 3% 2%
[libx264 @ 0x7ff94b810400] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 20% 16% 5% 7% 6% 5% 5% 4%
[libx264 @ 0x7ff94b810400] i8c dc,h,v,p: 67% 16% 15% 2%
[libx264 @ 0x7ff94b810400] Weighted P-Frames: Y:7.3% UV:4.2%
[libx264 @ 0x7ff94b810400] ref P L0: 66.3% 8.7% 17.9% 7.0% 0.1%
[libx264 @ 0x7ff94b810400] ref B L0: 88.2% 10.1% 1.7%
[libx264 @ 0x7ff94b810400] ref B L1: 94.9% 5.1%
[libx264 @ 0x7ff94b810400] kb/s:2026.12
[aac @ 0x7ff94b072400] Qavg: 635.626
The resulting output is identical to the input video, but does not contain the desired image video at the end.
One thing I have noticed: when I inspect the video files on macOS (Get Info), they always contain these lines under 'More info':
Dimensions: 1920 x 1080
Codecs: H.264, AAC
Color profile: HD(1-1-1)
Duration: 01:42
Audio channels: 2
Last opened: Today 11:02
However, the videos which have passed through the drawtext filter show this:
Dimensions: 1920 x 1080
Codecs: AAC, H.264
Duration: 00:10
Audio channels: 2
Last opened: Today 11:07As you can see, there is no color profile entry, and the codecs have switched places. I assume this is related to my issue, but I can’t seem to find a fix for it.
PS: The application will run in a PHP environment (Symfony). I noticed the concat command isn't available in the Symfony bundle for ffmpeg, so I'm using the regular terminal commands, which I'll execute from PHP.
EDIT
Attempt 3
On the advice of a coworker, I tried converting the video to .avi and re-converting it to .mp4, in the hope that this would strip any corrupted or extra info introduced by the drawtext filter. This spits out a completely different error.
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7f812413da00] Auto-inserting h264_mp4toannexb bitstream filtereed=0.516x
[concat @ 0x7f8124009a00] DTS 1569260 < 2551000 out of order
[h264 @ 0x7f8124846800] left block unavailable for requested intra4x4 mode -1
[h264 @ 0x7f8124846800] error while decoding MB 0 0, bytestream 47919
[h264 @ 0x7f8124846800] concealing 8160 DC, 8160 AC, 8160 MV errors in I frame
[aac @ 0x7f8125809a00] Queue input is backward in time
[aac @ 0x7f8125815a00] Queue input is backward in time
[h264 @ 0x7f8124846e00] number of reference frames (1+3) exceeds max (3; probably corrupt input), discarding one
[h264 @ 0x7f8124846e00] chroma_log2_weight_denom 26 is out of range
[h264 @ 0x7f8124846e00] deblocking_filter_idc 32 out of range
[h264 @ 0x7f8124846e00] decode_slice_header error
[h264 @ 0x7f8124846e00] no frame!
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902912, current: 4505491; changing to 4902913. This may result in incorrect timestamps in the output file.
[h264 @ 0x7f8124803400] reference overflow 20 > 15 or 0 > 15
[h264 @ 0x7f8124803400] decode_slice_header error
[h264 @ 0x7f8124803400] no frame!
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902913, current: 4506515; changing to 4902914. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8124802200] Non-monotonous DTS in output stream 0:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.
[mp4 @ 0x7f8125813000] Non-monotonous DTS in output stream 1:1; previous: 4902914, current: 4507539; changing to 4902915. This may result in incorrect timestamps in the output file.
# Again, this continues for quite a while.
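As an alternative to the .avi round trip, a route frequently suggested for the concat demuxer is to first re-encode every part with explicitly identical parameters (and, if the image clip has no audio, give it a silent track as in the earlier sketch) so that stream copy has nothing to reconcile. The profile, frame rate and audio settings below are illustrative, and parts.txt is assumed to list part1.mp4 and part2.mp4:
# normalise both parts to one common set of encoding parameters (values illustrative)
ffmpeg -i my_input_file.mp4 -c:v libx264 -profile:v high -pix_fmt yuv420p -r 30 \
  -c:a aac -ar 44100 -ac 2 part1.mp4
ffmpeg -i imageVideo.mp4 -c:v libx264 -profile:v high -pix_fmt yuv420p -r 30 \
  -c:a aac -ar 44100 -ac 2 part2.mp4
# concatenate without re-encoding, then produce the HD and SD outputs
ffmpeg -f concat -safe 0 -i parts.txt -c copy joined.mp4
ffmpeg -i joined.mp4 -vf scale=1920:1080 outputHD.mp4 -vf scale=1024:576 outputSD.mp4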
- Converting a voice recording into an mp3
21 July 2023, by Raphael M
For a Vue.js messaging project, I'm using the wavesurfer.js library to record voice messages. However, Google Chrome gives me an audio/webm blob and Safari gives me an audio/mp4 blob.


I'm trying to find a solution to transcode the blob into audio/mp3. I've tried several methods, including ffmpeg. However, ffmpeg gives me an error when compiling with "npm run dev": "Can't resolve '/node_modules/@ffmpeg/core/dist/ffmpeg-core.js'".


"@ffmpeg/core": "^0.11.0",
"@ffmpeg/ffmpeg": "^0.11.6"



I tried to downgrade ffmpeg


"@ffmpeg/core": "^0.9.0",
"@ffmpeg/ffmpeg": "^0.9.8"



I no longer get the error message when compiling, but when I try to convert my audio stream, the console reports a problem with SharedArrayBuffer: "Uncaught (in promise) ReferenceError: SharedArrayBuffer is not defined".


Here's my complete code below.
Is there a reliable way of transcoding the audio stream into mp3?


Can you give me an example?


Thanks
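For what it's worth, if the conversion can happen anywhere other than the browser (for instance on a backend with the ffmpeg binary installed, which is an assumption here since the code below uploads straight to S3), the transcode itself is a single command per container; the file names and the 128 kb/s bitrate are placeholders:
# Chrome recordings arrive as audio/webm, Safari recordings as audio/mp4
ffmpeg -i recording.webm -vn -c:a libmp3lame -b:a 128k recording.mp3
ffmpeg -i recording.mp4 -vn -c:a libmp3lame -b:a 128k recording.mp3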


<template>
 <div class="left-panel">
 <header class="radial-blue">
 <div class="container">
 <h1 class="mb-30">Posez votre première question à nos thérapeutes</h1>
 <p><b>Attention</b>, vous disposez seulement de 2 messages. Veillez à les utiliser de manière judicieuse !</p>
 <div class="available-messages">
 <div class="item disabled">
 <span>Message 1</span>
 </div>
 <div class="item">
 <span>Message 2</span>
 </div>
 </div>
 </div>
 </header>
 </div>
 <div class="right-panel">
 <div class="messagerie bg-light">
 <messaging ref="messagingComponent"></messaging>
 <footer>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-start.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div class="loading-animation">
 <img src="http://stackoverflow.com/assets/backoffice/images/record-loading.svg" style='max-width: 300px; max-height: 300px' />
 </div>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/record-stop.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div class="textarea gradient text-dark">
 <textarea placeholder="Posez votre question"></textarea>
 </div>
 <div class="loading-text">Chargement de votre microphone en cours...</div>
 <div class="loading-text">Envoi de votre message en cours...</div>
 <div ref="visualizer"></div>
 <button type="button"><img src="http://stackoverflow.com/assets/backoffice/images/send.svg" style='max-width: 300px; max-height: 300px' /></button>
 <div>
 {{ formatTimer() }}
 </div>
 </footer>
 </div>
 </div>
</template>

<script>
import Messaging from "./Messaging.vue";
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

export default {
  data() {
    return {
      isMicrophoneLoading: false,
      isSubmitLoading: false,
      isMobile: false,
      isMessagerie: false,
      isRecording: false,
      audioUrl: '',
      messageText: '',
      message: null,
      wavesurfer: null,
      access: (this.isMobile ? 'denied' : 'granted'),
      maxMinutes: 5,
      orangeTimer: 3,
      redTimer: 4,
      timer: 0,
      timerInterval: null,
      ffmpeg: null,
    };
  },
  components: {
    Messaging,
  },
  mounted() {
    this.checkScreenSize();
    window.addEventListener('resize', this.checkScreenSize);

    if (!this.isMobile) {
      this.$moment.locale('fr');
      window.addEventListener('beforeunload', (event) => {
        if (this.isMessagerie) {
          event.preventDefault();
          event.returnValue = '';
        }
      });

      this.initializeWaveSurfer();
    }
  },
  beforeUnmount() {
    window.removeEventListener('resize', this.checkScreenSize);
  },
  methods: {
    checkScreenSize() {
      this.isMobile = window.innerWidth < 1200;

      const windowHeight = window.innerHeight;
      const navbarHeight = this.$navbarHeight;
      let padding = parseInt(navbarHeight + 181);

      const messageListHeight = windowHeight - padding;
      this.$refs.messagingComponent.$refs.messageList.style.height = messageListHeight + 'px';
    },
    showMessagerie() {
      this.isMessagerie = true;
      this.$refs.messagingComponent.scrollToBottom();
    },
    checkMicrophoneAccess() {
      if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
        return navigator.mediaDevices.getUserMedia({ audio: true })
          .then(function (stream) {
            stream.getTracks().forEach(function (track) {
              track.stop();
            });
            return true;
          })
          .catch(function (error) {
            console.error('Erreur lors de la demande d\'accès au microphone:', error);
            return false;
          });
      } else {
        console.error('getUserMedia n\'est pas supporté par votre navigateur.');
        return false;
      }
    },
    initializeWaveSurfer() {
      this.wavesurfer = this.$wavesurfer.create({
        container: '#visualizer',
        barWidth: 3,
        barHeight: 1.5,
        height: 46,
        responsive: true,
        waveColor: 'rgba(108,115,202,0.3)',
        progressColor: 'rgba(108,115,202,1)',
        cursorColor: 'transparent'
      });

      this.record = this.wavesurfer.registerPlugin(this.$recordPlugin.create());
    },
    startRecording() {
      const _this = this;
      this.isMicrophoneLoading = true;

      setTimeout(() => {
        _this.checkMicrophoneAccess().then(function (accessible) {
          if (accessible) {
            _this.record.startRecording();

            _this.record.once('startRecording', () => {
              _this.isMicrophoneLoading = false;
              _this.isRecording = true;
              _this.updateChildMessage('server', 'Allez-y ! Vous pouvez enregistrer votre message audio maintenant. La durée maximale autorisée pour votre enregistrement est de 5 minutes.', 'text', '', 'Message automatique');
              _this.startTimer();
            });
          } else {
            _this.isRecording = false;
            _this.isMicrophoneLoading = false;
            _this.$swal.fire({
              title: 'Microphone non détecté',
              html: '<p>Le microphone de votre appareil est inaccessible ou l\'accès a été refusé.</p><p>Merci de vérifier les paramètres de votre navigateur afin de vérifier les autorisations de votre microphone.</p>',
              footer: '<a href="http://stackoverflow.com/contact">Vous avez besoin d\'aide ?</a>',
            });
          }
        });
      }, 100);
    },
    stopRecording() {
      this.stopTimer();
      this.isRecording = false;
      this.isSubmitLoading = true;
      this.record.stopRecording();

      this.record.once('stopRecording', () => {
        const blobUrl = this.record.getRecordedUrl();
        fetch(blobUrl).then(response => response.blob()).then(blob => {
          this.uploadAudio(blob);
        });
      });
    },
    startTimer() {
      this.timerInterval = setInterval(() => {
        this.timer++;
        if (this.timer === this.maxMinutes * 60) {
          this.stopRecording();
        }
      }, 1000);
    },
    stopTimer() {
      clearInterval(this.timerInterval);
      this.timer = 0;
    },
    formatTimer() {
      const minutes = Math.floor(this.timer / 60);
      const seconds = this.timer % 60;
      const formattedMinutes = minutes < 10 ? `0${minutes}` : minutes;
      const formattedSeconds = seconds < 10 ? `0${seconds}` : seconds;
      return `${formattedMinutes}:${formattedSeconds}`;
    },
    async uploadAudio(blob) {
      const format = blob.type === 'audio/webm' ? 'webm' : 'mp4';

      // Convert the blob to MP3
      const mp3Blob = await this.convertToMp3(blob, format);

      const s3 = new this.$AWS.S3({
        accessKeyId: 'xxx',
        secretAccessKey: 'xxx',
        region: 'eu-west-1'
      });

      var currentDate = new Date();
      var filename = currentDate.getDate().toString() + '-' + currentDate.getMonth().toString() + '-' + currentDate.getFullYear().toString() + '--' + currentDate.getHours().toString() + '-' + currentDate.getMinutes().toString() + '.mp4';

      const params = {
        Bucket: 'xxx/audio',
        Key: filename,
        Body: mp3Blob,
        ACL: 'public-read',
        ContentType: 'audio/mp3'
      }

      s3.upload(params, (err, data) => {
        if (err) {
          console.error('Error uploading audio:', err)
        } else {
          const currentDate = this.$moment();
          const timestamp = currentDate.format('dddd DD MMMM YYYY HH:mm');

          this.updateChildMessage('client', '', 'audio', mp3Blob, timestamp);
          this.isSubmitLoading = false;
        }
      });
    },
    async convertToMp3(blob, format) {
      const ffmpeg = createFFmpeg({ log: true });
      await ffmpeg.load();

      const inputPath = 'input.' + format;
      const outputPath = 'output.mp3';

      ffmpeg.FS('writeFile', inputPath, await fetchFile(blob));

      await ffmpeg.run('-i', inputPath, '-acodec', 'libmp3lame', outputPath);

      const mp3Data = ffmpeg.FS('readFile', outputPath);
      const mp3Blob = new Blob([mp3Data.buffer], { type: 'audio/mp3' });

      ffmpeg.FS('unlink', inputPath);
      ffmpeg.FS('unlink', outputPath);

      return mp3Blob;
    },
    sendMessage() {
      this.isSubmitLoading = true;
      if (this.messageText.trim() !== '') {
        const emmet = 'client';
        const text = this.escapeHTML(this.messageText)
          .replace(/\n/g, '<br>');

        const currentDate = this.$moment();
        const timestamp = currentDate.format('dddd DD MMMM YYYY HH:mm');

        this.$nextTick(() => {
          this.messageText = '';

          const textarea = document.getElementById('messageTextarea');
          if (textarea) {
            textarea.scrollTop = 0;
            textarea.scrollLeft = 0;
          }
        });

        this.updateChildMessage(emmet, text, 'text', '', timestamp);
        this.isSubmitLoading = false;
      }
    },
    escapeHTML(text) {
      const map = {
        '&': '&amp;',
        '<': '&lt;',
        '>': '&gt;',
        '"': '&quot;',
        "'": '&#039;',
        "`": '&#x60;',
        "/": '&#x2F;'
      };
      return text.replace(/[&<>"'`/]/g, (match) => map[match]);
    },
    updateChildMessage(emmet, text, type, blob, timestamp) {
      const newMessage = {
        id: this.$refs.messagingComponent.lastMessageId + 1,
        emmet: emmet,
        text: text,
        type: type,
        blob: blob,
        timestamp: timestamp
      };

      this.$refs.messagingComponent.updateMessages(newMessage);
    }
  },
};
</script>