22:11
I'm facing the error in the title, which crashes my thread.
int decode(AVPacket* pPacket)
{
    int count = 0;
    int response = avcodec_send_packet(_pCodecContext, pPacket);
    while (response >= 0)
    {
        AVFrame* pFrame = av_frame_alloc();
        response = avcodec_receive_frame(_pCodecContext, pFrame);
        (...)
This is the general structure of where I'm decoding the video into frames.
I do use seeking to change position within the AVStream to get different frames, with this:
avformat_seek_file(_pFormat, -1, INT64_MIN, location, INT64_MAX, (...)
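For reference (an assumption, since the code that computes `location` isn't shown): when `stream_index` is -1, `avformat_seek_file` expects timestamps in `AV_TIME_BASE` units (microseconds), so a seek target in seconds needs converting. A minimal sketch of that conversion:

```python
AV_TIME_BASE = 1_000_000  # FFmpeg's global time base: ticks per second

def to_av_time(seconds: float) -> int:
    """Convert a position in seconds to AV_TIME_BASE ticks for avformat_seek_file."""
    return int(seconds * AV_TIME_BASE)

location = to_av_time(12.5)  # seek target for 12.5 s -> 12500000
```

After a successful seek it is also usually necessary to call avcodec_flush_buffers() on the codec context, so the decoder does not keep returning frames buffered from before the seek.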
20:06
My bot is supposed to be some sort of Jeopardy quiz-show type of bot. /joinvc makes the bot connect to the call; however, I can't seem to make the bot make noise when it's in a VC. Here's some of the code:
@interactions.slash_command(
    name="press",
    description="Press the button"
)
async def press(ctx: interactions.ComponentContext):
    await ctx.send(f"{ctx.author.mention} has pressed the button")
    vc = ctx.author.voice.channel
    player = vc.create_ffmpeg_player('audiopath', after=lambda: print('done'))
    player.start()
(...)
18:32
Based on this article, it seems it is possible to use FFmpeg to detect scene changes in videos:
http://www.luckydinosaur.com/u/ffmpeg-scene-change-detector
Now I have a video that displays a book's text, and when the text (a word or sentence) is spoken, it gets highlighted.
Something like this audiobook: https://youtu.be/lA7L6ZNVKjc
I need to know the timestamp when the text gets highlighted (hence the scene change). This will allow me to add timestamp tags to my YouTube video, making it easier for listeners to navigate through the audiobook.
What is (...)
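As a sketch of the approach the article describes (the 0.3 threshold and the input name are placeholders to tune): FFmpeg's `select` filter with a `scene` score, combined with `showinfo`, logs a `pts_time:` value for each detected scene change, which can then be parsed out of the log:

```python
import re

# The command the technique boils down to (input.mp4 is a placeholder):
#   ffmpeg -i input.mp4 -vf "select='gt(scene,0.3)',showinfo" -f null -
# showinfo then prints one log line per selected frame on stderr.

def scene_timestamps(ffmpeg_log: str) -> list[float]:
    """Pull the pts_time values out of showinfo's log lines."""
    return [float(t) for t in re.findall(r"pts_time:([0-9.]+)", ffmpeg_log)]

sample = "[Parsed_showinfo_1 @ 0x55] n:3 pts:90090 pts_time:100.1 ..."
print(scene_timestamps(sample))  # [100.1]
```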
12:27
So I tried this command on Windows
(echo file 'first file.mp4' & echo file 'second file.mp4' )>list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4
while making a batch file.
Then I concatenated the two videos, and as soon as the first video ended, the output changed resolution and the video player (VLC) started glitching out pixels.
Even when I typed
ffmpeg -safe 0 -f concat -i list.txt -vf scale=640:360 -c copy output.mp4
Nothing (...)
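Incidentally, generating list.txt from a short script avoids cmd's echo quoting quirks (the filenames below are the question's own examples):

```python
from pathlib import Path

files = ["first file.mp4", "second file.mp4"]

# The concat demuxer wants one "file '<name>'" line per input;
# the single quotes protect the spaces in the names.
lines = [f"file '{name}'" for name in files]
Path("list.txt").write_text("\n".join(lines) + "\n", encoding="utf-8")
```

The resulting file is then used exactly as in the question: `ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4`.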
11:49
I am trying to stream my laptop's webcam over the RTSP protocol using ffmpeg, to simulate an actual IP camera. I have already tried different tools, including the VidGear Python package and GStreamer, but could not get it working. Note: I am on Windows 10. I have tried this command:
ffmpeg -f dshow -s 320x240 -rtbufsize 2147.48M -r 30 -vcodec mjpeg -i video="HD Camera" -f rtsp -rtsp_transport tcp rtsp://localhost:8554/mystream
It turns on the webcam but prints these logs to the console:
ffmpeg version 7.0-full_build-www.gyan.dev Copyright (c) (...)
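One assumption worth stating about the setup above: ffmpeg's `rtsp` output muxer publishes *to* an RTSP server rather than acting as one, so something like mediamtx would need to be listening on port 8554 first. Building the command as an argument list (same flags as the question) also sidesteps shell quoting of the device name:

```python
device = "HD Camera"  # dshow device name from the question

cmd = [
    "ffmpeg",
    "-f", "dshow",
    "-s", "320x240",
    "-rtbufsize", "2147.48M",
    "-r", "30",
    "-vcodec", "mjpeg",
    "-i", f"video={device}",
    "-f", "rtsp",
    "-rtsp_transport", "tcp",
    "rtsp://localhost:8554/mystream",
]
# e.g. subprocess.run(cmd) once an RTSP server is listening on 8554
```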
13:41
In the process of using the ffmpeg tool to edit video files, I used the subprocess module.
The code is as follows:
#trim bit
import subprocess
import os
seconds = "4"
mypath=os.path.abspath('trial.mp4')
subprocess.call(['ffmpeg', '-i',mypath, '-ss', seconds, 'trimmed.mp4'])
Error message:
Traceback (most recent call last):
File "C:\\moviepy-master\\resizer.py", line 29, in
subprocess.call(['ffmpeg', '-i',mypath, '-ss', seconds, (...)
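The traceback is cut off, but one common cause for `subprocess.call(['ffmpeg', ...])` failing on Windows is ffmpeg not being on PATH; here is a sketch that checks for that first (the cause is an assumption, and `build_trim_cmd` is a hypothetical helper):

```python
import shutil
import subprocess

def build_trim_cmd(src: str, seconds: str, dst: str = "trimmed.mp4") -> list[str]:
    """Assemble the same invocation as the question's subprocess.call."""
    return ["ffmpeg", "-i", src, "-ss", seconds, dst]

def trim(src: str, seconds: str) -> None:
    if shutil.which("ffmpeg") is None:  # fail early with a clear error
        raise FileNotFoundError("ffmpeg is not on PATH")
    subprocess.call(build_trim_cmd(src, seconds))
```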
13:16
I am trying to convert a group of .png images to a .webm video on Windows 10:
ffmpeg -i %03d.png output.webm
But I am getting this error:
'ffmpeg' is not recognized as an internal or external command, operable program or batch file.
12:48
I am using "ffmpeg_kit_flutter" to merge two videos with this code:
import 'dart:io';
import 'package:ffmpeg_kit_flutter/ffmpeg_kit.dart';
import 'package:ffmpeg_kit_flutter/abstract_session.dart';
import 'package:ffmpeg_kit_flutter/return_code.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:wechat_assets_picker/wechat_assets_picker.dart';
import (...)
11:15
Is it possible to create a media server with just one endpoint that receives several requests using camera protocols, and returns the streams from these cameras, formatted?
The front end will take these camera streams and display them in the browser.
I am still researching the subject.
12:21
I need some help.
I'm building a web app that takes any audio format, converts it into a .wav file, and then passes it to 'azure.cognitiveservices.speech' for transcription. I'm building the web app via a container Dockerfile, as I need to install ffmpeg to convert non-".wav" audio files to ".wav" (Azure Speech Services only processes wav files). For some odd reason, the 'speechsdk' class of 'azure.cognitiveservices.speech' fails to work when I install ffmpeg in the web app. The class works perfectly fine when I install it without ffmpeg (...)
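As a sketch of the conversion step described above (the 16 kHz / mono / 16-bit PCM parameters are assumptions about what the speech service expects, and `wav_convert_cmd` is a hypothetical helper):

```python
def wav_convert_cmd(src: str, dst: str = "out.wav") -> list[str]:
    """Build the ffmpeg call that converts any input audio to PCM WAV."""
    return [
        "ffmpeg", "-i", src,
        "-ar", "16000",       # 16 kHz sample rate
        "-ac", "1",           # mono
        "-c:a", "pcm_s16le",  # 16-bit little-endian PCM
        dst,
    ]

print(wav_convert_cmd("upload.mp3")[-1])  # out.wav
```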
08:34
When importing the PyAV module, I am unable to show an image with OpenCV using imshow().
Code without the PyAv module (works as expected)
import cv2
img = cv2.imread("test_image.jpeg")
cv2.imshow('image', img)
cv2.waitKey(0)
Code with the import (doesn't work; it just hangs)
import cv2
import av
img = cv2.imread("test_image.jpeg")
cv2.imshow('image', img)
cv2.waitKey(0)
OS: Linux arch 5.18.3-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 09 Jun 2022 16:14:10 +0000 x86_64 GNU/Linux
Am I doing (...)
06:43
I have two videos. One is 42 minutes 35 seconds long and the other is 46 seconds long. I need to insert the 46-second video after every 3 minutes 50 seconds of the long video, and export the result as a separate video. This should be done across the full 42 minutes 35 seconds. Can this be done in FFmpeg?
how to add the (...)
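A sketch of just the timing arithmetic behind the question (the actual splitting and concatenation would still be done with FFmpeg): where, in the 42:35 source, each 46-second insertion would start:

```python
main_len = 42 * 60 + 35  # 2555 s: length of the long video
interval = 3 * 60 + 50   # 230 s: gap between insertions

# Source-time positions where the 46-second clip would be spliced in.
insert_points = list(range(interval, main_len, interval))
print(insert_points)
# [230, 460, 690, 920, 1150, 1380, 1610, 1840, 2070, 2300, 2530]
```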