
Other articles (82)
-
Contributing to its translation
10 April 2011: You can help us improve the wording used in the software, or translate it into any new language, allowing it to reach new linguistic communities.
This is done through SPIP's translation interface, where all of MediaSPIP's language modules are available. You just need to sign up for the translators' mailing list to ask for more information.
At present MediaSPIP is only available in French and (...) -
MediaSPIP v0.2
21 June 2013: MediaSPIP 0.2 is the first stable version of MediaSPIP.
Its official release date is 21 June 2013, as announced here.
The zip file provided here contains only the MediaSPIP sources, in standalone form.
As with the previous version, all of the software dependencies must be installed manually on the server.
If you want to use this archive for a farm-mode installation, you will also need to make further changes (...) -
Making files available
14 April 2011: By default, when first set up, MediaSPIP does not let visitors download files, whether they are originals or the result of transformation or encoding. It only lets them view the files.
However, it is possible and easy to give visitors access to these documents, in various forms.
All of this happens on the skeleton's configuration page: go to the channel's administration area and choose, in the navigation (...)
On other sites (16631)
-
Video encoding task not working with Django Celery Redis FFMPEG and GraphQL
18 June 2023, by phanio: I'm having a hard time understanding how this FFmpeg encoding works when using Django, Celery, Redis, GraphQL and Docker.


I have this video / courses platform project, and what I'm trying to do with FFmpeg, Celery and Redis is to create different video resolutions so I can display them the way YouTube does inside the video player (the video player is handled on the frontend by Next.js and Apollo Client). On the backend I've just learned that, in order to use FFmpeg properly to resize the original video, I need to use Celery and Redis to perform asynchronous tasks. I've found a few older posts here on Stack Overflow and on Google, but it's not quite enough information for someone using ffmpeg, Celery and Redis for the first time (I've already worked step by step through the example that adds two numbers together with Celery, and that works fine). Now I'm not sure what is wrong with my code, because first of all I'm not really sure from where, meaning from which file, I should trigger the task; at the end of the task I want to send the data through the API using GraphQL with Strawberry.
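
One pattern I've seen suggested for the "where do I trigger the task from" part (shown here only as a rough sketch reusing the names from my code below, not something I have verified) is to enqueue the encoding right after the create_video mutation saves the Video, rather than from a query resolver:

from django.db import transaction

# inside the create_video mutation, right after video.save():
# enqueue only once the row is committed, so the worker can load it by id
transaction.on_commit(lambda: task_video_encoding_720p.delay(video.video_id))
transaction.on_commit(lambda: task_video_encoding_1080p.delay(video.video_id))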


This is what I've tried so far:


So, first things first, my project structure looks like this:


- backend #root directory
 --- backend
 -- __init__.py
 -- celery.py
 -- settings.py
 -- urls.py
 etc..

 --- static
 -- videos

 --- video
 -- models.py
 -- schema.py
 -- tasks.py
 -- types.py
 etc..

 --- .env

 --- db.sqlite3

 --- docker-compose.yml

 --- Dockerfile

 --- manage.py

 --- requirements.txt



here is my settings.py file :


from pathlib import Path
import os

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

DEBUG = True

ALLOWED_HOSTS=["localhost", "0.0.0.0", "127.0.0.1"]

DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'


# Application definition

INSTALLED_APPS = [
 "corsheaders",
 'django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles',

 "strawberry.django",
 "video"
]

etc...

STATIC_URL = '/static/'
MEDIA_URL = '/videos/'

STATICFILES_DIRS = [
 BASE_DIR / 'static',
 # BASE_DIR / 'frontend/build/static',
]

MEDIA_ROOT = BASE_DIR / 'static/videos'

STATIC_ROOT = BASE_DIR / 'staticfiles'

STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'

CORS_ALLOW_ALL_ORIGINS = True


CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'

# REDIS CACHE
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": f"redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

# Docker
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")
CELERY_RESULT_BACKEND = os.environ.get("CELERY_BROKER", "redis://redis:6379/0")



This is my main urls.py file :


from django.contrib import admin
from django.conf import settings
from django.conf.urls.static import static
from django.urls import path
from django.urls.conf import include
from strawberry.django.views import GraphQLView

from video.schema import schema

urlpatterns = [
 path('admin/', admin.site.urls),
 path("graphql", GraphQLView.as_view(schema=schema)),
]

if settings.DEBUG:
 urlpatterns += static(settings.MEDIA_URL,
 document_root=settings.MEDIA_ROOT)
 urlpatterns += static(settings.STATIC_URL,
 document_root=settings.STATIC_ROOT)



This is my celery.py file :


from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
from django.conf import settings

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings')

backend = Celery('backend')

backend.config_from_object('django.conf:settings', namespace="CELERY")

backend.autodiscover_tasks()

@backend.task(bind=True)
def debug_task(self):
 print('Request: {0!r}'.format(self.request))



This is my __init__.py file:


from .celery import backend as celery_backend

__all__ = ('celery_backend',)



This is my Dockerfile :


FROM python:3
ENV PYTHONUNBUFFERED=1

WORKDIR /usr/src/backend

RUN apt-get -y update
RUN apt-get -y upgrade
RUN apt-get install -y ffmpeg

COPY requirements.txt ./
RUN pip install -r requirements.txt



This is my docker-compose.yml file :


version: "3.8"

services:
 django:
 build: .
 container_name: django
 command: python manage.py runserver 0.0.0.0:8000
 volumes:
 - .:/usr/src/backend/
 ports:
 - "8000:8000"
 environment:
 - DEBUG=1
 - DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
 - CELERY_BROKER=redis://redis:6379/0
 - CELERY_BACKEND=redis://redis:6379/0
 depends_on:
 - pgdb
 - redis

 celery:
 build: .
 command: celery -A backend worker -l INFO
 volumes:
 - .:/usr/src/backend
 depends_on:
 - django
 - redis

 pgdb:
 image: postgres
 container_name: pgdb
 environment:
 - POSTGRES_DB=postgres
 - POSTGRES_USER=postgres
 - POSTGRES_PASSWORD=postgres
 volumes:
 - pgdata:/var/lib/postgresql/data/

 redis:
 image: "redis:alpine"

volumes:
 pgdata:



And now inside my video app folder :


My models.py file :


- here I've created separate fields for all resolution sizes, from video_file_2k down to video_file_144; I was thinking that maybe, after the encoding process, these fields would be populated.




from django.db import models
from django.urls import reverse


class Video(models.Model):
    video_id = models.AutoField(primary_key=True, editable=False)
    slug = models.SlugField(max_length=255)
    title = models.CharField(max_length=150, blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    video_file = models.FileField(null=False, blank=False)
    video_file_2k = models.FileField(null=True, blank=True)
    video_file_fullhd = models.FileField(null=True, blank=True)
    video_file_hd = models.FileField(null=True, blank=True)
    video_file_480 = models.FileField(null=True, blank=True)
    video_file_360 = models.FileField(null=True, blank=True)
    video_file_240 = models.FileField(null=True, blank=True)
    video_file_144 = models.FileField(null=True, blank=True)
    category = models.CharField(max_length=64, blank=False, null=False)
    created_at = models.DateTimeField(
        ("Created at"), auto_now_add=True, editable=False)
    updated_at = models.DateTimeField(("Updated at"), auto_now=True)

    class Meta:
        ordering = ("-created_at",)
        verbose_name = ("Video")
        verbose_name_plural = ("Videos")

    def get_absolute_url(self):
        return reverse("store:video_detail", args=[self.slug])

    def __str__(self):
        return self.title



This is my schema.py file :


import strawberry
from strawberry.file_uploads import Upload
from typing import List
from .types import VideoType
from .models import Video
from .tasks import task_video_encoding_1080p, task_video_encoding_720p


@strawberry.type
class Query:
    @strawberry.field
    def videos(self, category: str = None) -> List[VideoType]:
        if category:
            videos = Video.objects.filter(category=category)
            return videos
        return Video.objects.all()

    @strawberry.field
    def video(self, slug: str) -> VideoType:
        if slug == slug:
            video = Video.objects.get(slug=slug)
            return video

    @strawberry.field
    def video_by_id(self, video_id: int) -> VideoType:
        if video_id == video_id:
            video = Video.objects.get(pk=video_id)

            # Here I've tried to trigger my tasks, when I visited 0.0.0.0:8000/graphql url
            # and I was querying for a video by it's id, then I've got the error from celery
            task_video_encoding_1080p.delay(video_id)
            task_video_encoding_720p.delay(video_id)

            return video


@strawberry.type
class Mutation:
    @strawberry.field
    def create_video(self, slug: str, title: str, description: str, video_file: Upload, video_file_2k: str, video_file_fullhd: str, video_file_hd: str, video_file_480: str, video_file_360: str, video_file_240: str, video_file_144: str, category: str) -> VideoType:

        video = Video(slug=slug, title=title, description=description,
                      video_file=video_file, video_file_2k=video_file_2k, video_file_fullhd=video_file_fullhd, video_file_hd=video_file_hd, video_file_480=video_file_480, video_file_360=video_file_360, video_file_240=video_file_240, video_file_144=video_file_144, category=category)

        video.save()
        return video

    @strawberry.field
    def update_video(self, video_id: int, slug: str, title: str, description: str, video_file: str, category: str) -> VideoType:
        video = Video.objects.get(video_id=video_id)
        video.slug = slug
        video.title = title
        video.description = description
        video.video_file = video_file
        video.category = category
        video.save()
        return video

    @strawberry.field
    def delete_video(self, video_id: int) -> bool:
        video = Video.objects.get(video_id=video_id)
        video.delete
        return True


schema = strawberry.Schema(query=Query, mutation=Mutation)



This is my types.py file ( strawberry graphql related ) :


import strawberry

from .models import Video


@strawberry.django.type(Video)
class VideoType:
 video_id: int
 slug: str
 title: str
 description: str
 video_file: str
 video_file_2k: str
 video_file_fullhd: str
 video_file_hd: str
 video_file_480: str
 video_file_360: str
 video_file_240: str
 video_file_144: str
 category: str



And this is my tasks.py file :


from __future__ import absolute_import, unicode_literals
import os, subprocess
from django.conf import settings
from django.core.exceptions import ValidationError
from celery import shared_task
from celery.utils.log import get_task_logger
from .models import Video
FFMPEG_PATH = os.environ["IMAGEIO_FFMPEG_EXE"] = "/opt/homebrew/Cellar/ffmpeg/6.0/bin/ffmpeg"

logger = get_task_logger(__name__)


# CELERY TASKS
@shared_task
def add(x, y):
    return x + y


@shared_task
def task_video_encoding_720p(video_id):
    logger.info('Video Processing started')
    try:
        video = Video.objects.get(video_id=video_id)
        input_file_path = video.video_file.path
        input_file_url = video.video_file.url
        input_file_name = video.video_file.name

        # get the filename (without extension)
        filename = os.path.basename(input_file_url)

        # path to the new file, change it according to where you want to put it
        output_file_name = os.path.join('videos', 'mp4', '{}.mp4'.format(filename))
        output_file_path = os.path.join(settings.MEDIA_ROOT, output_file_name)

        # 2-pass encoding
        for i in range(1):
            new_video_720p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1280x720', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])
            # new_video_720p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '{}x{}'.format(height * 16/9, height), '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])

        if new_video_720p == 0:
            # save the new file in the database
            # video.video_file_hd.name = output_file_name
            video.save(update_fields=['video_file_hd'])
            logger.info('Video Processing Finished')
            return video

        else:
            logger.info('Proceesing Failed.')  # Just for now

    except:
        raise ValidationError('Something went wrong')


@shared_task
# def task_video_encoding_1080p(video_id, height):
def task_video_encoding_1080p(video_id):
    logger.info('Video Processing started')
    try:
        video = Video.objects.get(video_id=video_id)
        input_file_path = video.video_file.url
        input_file_name = video.video_file.name

        # get the filename (without extension)
        filename = os.path.basename(input_file_path)

        # path to the new file, change it according to where you want to put it
        output_file_name = os.path.join('videos', 'mp4', '{}.mp4'.format(filename))
        output_file_path = os.path.join(settings.MEDIA_ROOT, output_file_name)

        for i in range(1):
            new_video_1080p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1920x1080', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])

        if new_video_1080p == 0:
            # save the new file in the database
            # video.video_file_hd.name = output_file_name
            video.save(update_fields=['video_file_fullhd'])
            logger.info('Video Processing Finished')
            return video
        else:
            logger.info('Proceesing Failed.')  # Just for now

    except:
        raise ValidationError('Something went wrong')



In my first attempt I wasn't triggering the tasks anywhere; then I tried to trigger them from the schema.py file, inside the GraphQL video_by_id resolver, but there I got this error:


backend-celery-1 | django.core.exceptions.ValidationError: ['Something went wrong']
backend-celery-1 | [2023-06-18 16:38:52,859: ERROR/ForkPoolWorker-4] Task video.tasks.task_video_encoding_1080p[d33b1a42-5914-467c-ad5c-00565bc8be6f] raised unexpected: ValidationError(['Something went wrong'])
backend-celery-1 | Traceback (most recent call last):
backend-celery-1 | File "/usr/src/backend/video/tasks.py", line 81, in task_video_encoding_1080p
backend-celery-1 | new_video_1080p = subprocess.call([FFMPEG_PATH, '-i', input_file_path, '-s', '1920x1080', '-vcodec', 'mpeg4', '-acodec', 'libvo_aacenc', '-b', '10000k', '-pass', i, '-r', '30', output_file_path])
backend-celery-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1 | File "/usr/local/lib/python3.11/subprocess.py", line 389, in call
backend-celery-1 | with Popen(*popenargs, **kwargs) as p:
backend-celery-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1 | File "/usr/local/lib/python3.11/subprocess.py", line 1026, in __init__
backend-celery-1 | self._execute_child(args, executable, preexec_fn, close_fds,
backend-celery-1 | File "/usr/local/lib/python3.11/subprocess.py", line 1883, in _execute_child
backend-celery-1 | self.pid = _fork_exec(
backend-celery-1 | ^^^^^^^^^^^
backend-celery-1 | TypeError: expected str, bytes or os.PathLike object, not int
backend-celery-1 | 
backend-celery-1 | During handling of the above exception, another exception occurred:
backend-celery-1 | 
backend-celery-1 | Traceback (most recent call last):
backend-celery-1 | File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 477, in trace_task
backend-celery-1 | R = retval = fun(*args, **kwargs)
backend-celery-1 | ^^^^^^^^^^^^^^^^^^^^
backend-celery-1 | File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 760, in __protected_call__
backend-celery-1 | return self.run(*args, **kwargs)
backend-celery-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
backend-celery-1 | File "/usr/src/backend/video/tasks.py", line 93, in task_video_encoding_1080p
backend-celery-1 | raise ValidationError('Something went wrong')
backend-celery-1 | django.core.exceptions.ValidationError: ['Something went wrong']
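
From the traceback, the TypeError comes from the subprocess argument list: every element passed to subprocess.call must be a string (or path-like), but the loop variable i is an int. A corrected call would presumably look something like the sketch below (not verified end to end; note also that range(1) only ever yields 0, and recent FFmpeg builds may no longer ship libvo_aacenc):

for i in range(2):  # a real 2-pass encode needs -pass 1 and then -pass 2
    new_video_1080p = subprocess.call([
        FFMPEG_PATH, '-i', input_file_path,
        '-s', '1920x1080',
        '-vcodec', 'mpeg4',
        '-acodec', 'libvo_aacenc',
        '-b', '10000k',
        '-pass', str(i + 1),  # pass number as a string, not an int
        '-r', '30',
        output_file_path,
    ])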



If anyone has done this kind of project, or something like it, any suggestion or help is much appreciated.


Thank you in advance !


-
ffmpeg streaming UDP port is closed [closed]
26 December 2023, by BrilliantContract: I'm trying to use ffmpeg to transcode an RTSP stream from a CCTV camera into an HLS stream so that it can be served through a web server.
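
For reference, writing actual HLS output (a playlist plus segments that the web server can serve directly) would look roughly like the sketch below; the output path and codec choices are only placeholders:

ffmpeg -i "rtsp://cam-1.loc:554?user=admin&password=admin&channel=1&stream=0" \
 -c:v libx264 -c:a aac \
 -f hls -hls_time 3 -hls_list_size 10 \
 /var/www/html/stream/index.m3u8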


ffmpeg is used to stream video from the CCTV camera with the following command:


$ ffmpeg -i "rtsp://cam-1.loc:554?user=admin&password=admin&channel=1&stream=0" -hls_time 3 -hls_wrap 10 -f mpegts udp://localhost:6601
ffmpeg version 4.2.8 Copyright (c) 2000-2022 the FFmpeg developers
 built with gcc 8 (GCC)
 configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' --extra-ldflags='-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags=' ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom --enable-libdav1d --enable-libass --enable-libbluray --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmp3lame --enable-nvenc --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librsvg --enable-libsrt --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-version3 --enable-vapoursynth --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libzvbi --enable-avfilter --enable-avresample --enable-libmodplug --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-libmfx --enable-runtime-cpudetect
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[rtsp @ 0x5576c7340600] getaddrinfo(cam-1.loc): Name or service not known
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, rtsp, from 'rtsp://cam-1.loc:554?user=admin&password=admin&channel=1&stream=0':
 Metadata:
 title : RTSP Session
 Duration: N/A, start: 0.000000, bitrate: N/A
 Stream #0:0: Video: h264 (Main), yuv420p(progressive), 1920x1080, 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:1: Audio: pcm_alaw, 8000 Hz, mono, s16, 64 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (h264 (native) -> mpeg2video (native))
 Stream #0:1 -> #0:1 (pcm_alaw (native) -> mp2 (native))
Press [q] to stop, [?] for help
Output #0, mpegts, to 'udp://localhost:6601':
 Metadata:
 title : RTSP Session
 encoder : Lavf58.29.100
 Stream #0:0: Video: mpeg2video (Main), yuv420p, 1920x1080, q=2-31, 200 kb/s, 25 fps, 90k tbn, 25 tbc
 Metadata:
 encoder : Lavc58.54.100 mpeg2video
 Side data:
 cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
 Stream #0:1: Audio: mp2, 16000 Hz, mono, s16, 160 kb/s
 Metadata:
 encoder : Lavc58.54.100 mp2
[rtsp @ 0x5576c7340600] max delay reached. need to consume packette=5338.9kbits/s dup=0 drop=5 speed=1.12x 
[rtsp @ 0x5576c7340600] RTP: missed 3 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 6 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 6 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 4 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 6 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 6 packets
[h264 @ 0x5576c7993c80] concealing 972 DC, 972 AC, 972 MV errors in I frame
rtsp://cam-1.loc:554?user=admin&password=admin&channel=1&stream=0: corrupt decoded frame in stream 0=1.11x 
[rtsp @ 0x5576c7340600] max delay reached. need to consume packette=5298.4kbits/s dup=0 drop=5 speed=1.02x 
[rtsp @ 0x5576c7340600] RTP: missed 2 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 4 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 3 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 4 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 5 packets
[rtsp @ 0x5576c7340600] max delay reached. need to consume packet
[rtsp @ 0x5576c7340600] RTP: missed 2 packets
[h264 @ 0x5576c779b9c0] cabac decode of qscale diff failed at 66 60
[h264 @ 0x5576c779b9c0] error while decoding MB 66 60, bytestream 9825
[h264 @ 0x5576c779b9c0] concealing 943 DC, 943 AC, 943 MV errors in I frame
rtsp://cam-1.loc:554?user=admin&password=admin&channel=1&stream=0: corrupt decoded frame in stream 0=1.02x 
frame= 1315 fps= 25 q=31.0 Lsize= 34249kB time=00:00:53.32 bitrate=5261.8kbits/s dup=0 drop=5 speed=1.02x 
video:30544kB audio:1042kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 8.431973%



nmap is used to check whether port 6601 is open:


$ nmap -Pn localhost -p 6601
Starting Nmap 7.70 ( https://nmap.org ) at 2023-12-26 10:47 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0011s latency).
Other addresses for localhost (not scanned): ::1

PORT STATE SERVICE
6601/tcp closed mstmg-sstp

Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
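
Note that nmap scans TCP ports by default, which is why the report above is for 6601/tcp. A UDP scan (usually requiring root) would be:

$ sudo nmap -sU -p 6601 localhost

Even then, since UDP has no handshake, the port will typically only show as open|filtered while a receiver such as ffplay is actually bound to it.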



However, ffplay is able to play the video stream:


ffplay udp://localhost:6601
ffplay version 4.2.8 Copyright (c) 2003-2022 the FFmpeg developers
 built with gcc 8 (GCC)
 configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' --extra-ldflags='-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags=' ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom --enable-libdav1d --enable-libass --enable-libbluray --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmp3lame --enable-nvenc --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librsvg --enable-libsrt --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-version3 --enable-vapoursynth --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libzvbi --enable-avfilter --enable-avresample --enable-libmodplug --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-libmfx --enable-runtime-cpudetect
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[mpeg2video @ 0x7f1ad854afc0] Invalid frame dimensions 0x0. f=0/0 
 Last message repeated 7 times
Input #0, mpegts, from 'udp://localhost:6601':0KB sq= 0B f=0/0 
 Duration: N/A, start: 59.288000, bitrate: N/A
 Program 1 
 Metadata:
 service_name : RTSP Session
 service_provider: FFmpeg
 Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:1[0x101]: Audio: mp2 ([4][0][0][0] / 0x0004), 16000 Hz, mono, s16p, 160 kb/s



VLC cannot play the video stream :


vlc udp://localhost:6601
VLC media player 3.0.18 Vetinari (revision )
[000055769aa81ba0] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
Warning: Ignoring XDG_SESSION_TYPE=wayland on Gnome. Use QT_QPA_PLATFORM=wayland to run on Wayland anyway.
[00007fec64011e90] mjpeg demux error: cannot peek
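
As far as I understand, VLC needs the listening form of the UDP MRL to receive such a stream, for example:

vlc udp://@:6601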



ffprobe output


ffprobe udp://localhost:6601
ffprobe version 4.2.8 Copyright (c) 2007-2022 the FFmpeg developers
 built with gcc 8 (GCC)
 configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --docdir=/usr/share/doc/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' --extra-ldflags='-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld ' --extra-cflags=' ' --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libvo-amrwbenc --enable-version3 --enable-bzlib --disable-crystalhd --enable-fontconfig --enable-frei0r --enable-gcrypt --enable-gnutls --enable-ladspa --enable-libaom --enable-libdav1d --enable-libass --enable-libbluray --enable-libcdio --enable-libdrm --enable-libjack --enable-libfreetype --enable-libfribidi --enable-libgsm --enable-libmp3lame --enable-nvenc --enable-openal --enable-opencl --enable-opengl --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-librsvg --enable-libsrt --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libvidstab --enable-libvmaf --enable-version3 --enable-vapoursynth --enable-libvpx --enable-libx264 --enable-libx265 --enable-libxvid --enable-libzimg --enable-libzvbi --enable-avfilter --enable-avresample --enable-libmodplug --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-libmfx --enable-runtime-cpudetect
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
[mpeg2video @ 0x55e1be0910c0] Invalid frame dimensions 0x0.
 Last message repeated 9 times
Input #0, mpegts, from 'udp://localhost:6601':
 Duration: N/A, start: 262.760000, bitrate: N/A
 Program 1 
 Metadata:
 service_name : RTSP Session
 service_provider: FFmpeg
 Stream #0:0[0x100]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p(tv, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
 Stream #0:1[0x101]: Audio: mp2 ([4][0][0][0] / 0x0004), 16000 Hz, mono, s16p, 160 kb/s



Why is the video stream not playing in VLC?


Why does nmap not see that the UDP port is open?


-
ffmpeg failed to load audio file
14 April 2024, by Vaishnav Ghenge: Failed to load audio: ffmpeg version 5.1.4-0+deb12u1 Copyright (c) 2000-2023 the FFmpeg developers
 built with gcc 12 (Debian 12.2.0-14)
 configuration: --prefix=/usr --extra-version=0+deb12u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librist --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --disable-sndio --enable-libjxl --enable-pocketsphinx --enable-librsvg --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-libplacebo --enable-librav1e --enable-shared
 libavutil 57. 28.100 / 57. 28.100
 libavcodec 59. 37.100 / 59. 37.100
 libavformat 59. 27.100 / 59. 27.100
 libavdevice 59. 7.100 / 59. 7.100
 libavfilter 8. 44.100 / 8. 44.100
 libswscale 6. 7.100 / 6. 7.100
 libswresample 4. 7.100 / 4. 7.100
 libpostproc 56. 6.100 / 56. 6.100
/tmp/tmpjlchcpdm.wav: Invalid data found when processing input
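
This error usually means that the bytes written to the temporary file are not a valid WAV container at all. A quick way to check what was actually received is to run ffprobe on the temporary file, for example:

ffprobe -hide_banner /tmp/tmpjlchcpdm.wav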



backend :



@app.route("/transcribe", methods=["POST"])
def transcribe():
 # Check if audio file is present in the request
 if 'audio_file' not in request.files:
 return jsonify({"error": "No file part"}), 400
 
 audio_file = request.files.get('audio_file')

 # Check if audio_file is sent in files
 if not audio_file:
 return jsonify({"error": "`audio_file` is missing in request.files"}), 400

 # Check if the file is present
 if audio_file.filename == '':
 return jsonify({"error": "No selected file"}), 400

 # Save the file with a unique name
 filename = secure_filename(audio_file.filename)
 unique_filename = os.path.join("uploads", str(uuid.uuid4()) + '_' + filename)
 # audio_file.save(unique_filename)
 
 # Read the contents of the audio file
 contents = audio_file.read()

 max_file_size = 500 * 1024 * 1024
 if len(contents) > max_file_size:
 return jsonify({"error": "File is too large"}), 400

 # Check if the file extension suggests it's a WAV file
 if not filename.lower().endswith('.wav'):
 # Delete the file if it's not a WAV file
 os.remove(unique_filename)
 return jsonify({"error": "Only WAV files are supported"}), 400

 print(f"\033[92m{filename}\033[0m")

 # Call Celery task asynchronously
 result = transcribe_audio.delay(contents)

 return jsonify({
 "task_id": result.id,
 "status": "pending"
 })


@celery_app.task
def transcribe_audio(contents):
 # Transcribe the audio
 try:
 # Create a temporary file to save the audio data
 with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as temp_audio:
 temp_path = temp_audio.name
 temp_audio.write(contents)

 print(f"\033[92mFile temporary path: {temp_path}\033[0m")
 transcribe_start_time = time.time()

 # Transcribe the audio
 transcription = transcribe_with_whisper(temp_path)
 
 transcribe_end_time = time.time()
 print(f"\033[92mTranscripted text: {transcription}\033[0m")

 return transcription, transcribe_end_time - transcribe_start_time

 except Exception as e:
 print(f"\033[92mError: {e}\033[0m")
 return str(e)



frontend :


useEffect(() => {
 const init = () => {
 navigator.mediaDevices.getUserMedia({audio: true})
 .then((audioStream) => {
 const recorder = new MediaRecorder(audioStream);

 recorder.ondataavailable = e => {
 if (e.data.size > 0) {
 setChunks(prevChunks => [...prevChunks, e.data]);
 }
 };

 recorder.onerror = (e) => {
 console.log("error: ", e);
 }

 recorder.onstart = () => {
 console.log("started");
 }

 recorder.start();

 setStream(audioStream);
 setRecorder(recorder);
 });
 }

 init();

 return () => {
 if (recorder && recorder.state === 'recording') {
 recorder.stop();
 }

 if (stream) {
 stream.getTracks().forEach(track => track.stop());
 }
 }
 }, []);

 useEffect(() => {
 // Send chunks of audio data to the backend at regular intervals
 const intervalId = setInterval(() => {
 if (recorder && recorder.state === 'recording') {
 recorder.requestData(); // Trigger data available event
 }
 }, 8000); // Adjust the interval as needed


 return () => {
 if (intervalId) {
 console.log("Interval cleared");
 clearInterval(intervalId);
 }
 };
 }, [recorder]);

 useEffect(() => {
 const processAudio = async () => {
 if (chunks.length > 0) {
 // Send the latest chunk to the server for transcription
 const latestChunk = chunks[chunks.length - 1];

 const audioBlob = new Blob([latestChunk]);
 convertBlobToAudioFile(audioBlob);
 }
 };

 void processAudio();
 }, [chunks]);

 const convertBlobToAudioFile = useCallback((blob: Blob) => {
 // Convert Blob to audio file (e.g., WAV)
 // This conversion may require using a third-party library or service
 // For example, you can use the MediaRecorder API to record audio in WAV format directly
 // Alternatively, you can use a library like recorderjs to perform the conversion
 // Here's a simplified example using recorderjs:

 const reader = new FileReader();
 reader.onload = () => {
 const audioBuffer = reader.result; // ArrayBuffer containing audio data

 // Send audioBuffer to Flask server or perform further processing
 sendAudioToFlask(audioBuffer as ArrayBuffer);
 };

 reader.readAsArrayBuffer(blob);
 }, []);

 const sendAudioToFlask = useCallback((audioBuffer: ArrayBuffer) => {
 const formData = new FormData();
 formData.append('audio_file', new Blob([audioBuffer]), `speech_audio.wav`);

 console.log(formData.get("audio_file"));

 fetch('http://34.87.75.138:8000/transcribe', {
 method: 'POST',
 body: formData
 })
 .then(response => response.json())
 .then((data: { task_id: string, status: string }) => {
 pendingTaskIdsRef.current.push(data.task_id);
 })
 .catch(error => {
 console.error('Error sending audio to Flask server:', error);
 });
 }, []);



I was trying to pass the audio from the frontend to the Whisper model, which lives in the Flask app.
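
One workaround I'm considering (only a sketch; normalise_to_wav is a hypothetical helper and it assumes ffmpeg is installed alongside the Flask app) is to let ffmpeg re-encode whatever the browser actually sends (MediaRecorder normally produces WebM/Ogg Opus rather than WAV, and chunks after the first one carry no container header) into a real 16 kHz mono WAV before calling transcribe_with_whisper:

import subprocess

def normalise_to_wav(input_path: str) -> str:
    # hypothetical helper: re-encode the uploaded blob into 16 kHz mono PCM WAV
    wav_path = input_path + ".converted.wav"
    subprocess.run(
        ["ffmpeg", "-y", "-i", input_path, "-ar", "16000", "-ac", "1", wav_path],
        check=True,           # raises CalledProcessError if ffmpeg cannot read the input
        capture_output=True,  # keep ffmpeg's stderr around for debugging
    )
    return wav_path

# in transcribe_audio, before transcription:
# temp_path = normalise_to_wav(temp_path)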