
Media (2)
-
Granite de l’Aber Ildut
9 September 2011, by
Updated: September 2011
Language: French
Type: Text
-
Géodiversité
9 September 2011, by ,
Updated: August 2018
Language: French
Type: Text
Other articles (59)
-
Videos
21 April 2011, by
As with "audio" documents, MediaSPIP displays videos wherever possible using the HTML5 <video> tag.
One drawback of this tag is that some browsers do not recognise it correctly (Internet Explorer, to name no names) and each browser natively handles only certain video formats.
Its main advantage, on the other hand, is native video support in the browser, which makes it possible to do without Flash and (...) -
Installation in farm mode
4 February 2011, by
Farm mode makes it possible to host several MediaSPIP-type sites while installing their functional core only once.
This is the method we use on this very platform.
Using farm mode requires some familiarity with how SPIP works, unlike the standalone version, which does not really require any specific knowledge since SPIP's usual private area is no longer used.
As a first step, you must have installed the same files as for the installation (...) -
Websites made with MediaSPIP
2 May 2011, by
This page lists some websites based on MediaSPIP.
On other sites (5206)
-
FFMPEG multi livestream - recorded stream sent to different services like YT and Twitch at different times (on different button clicks)
4 October 2022, by Ganesh
Trying for the last 10 days and still no success. I am creating a Python application that will accept a URL, visit that URL using Chromium, capture that screen, and send the real-time screen recording to different livestream acceptors such as YouTube Live, Twitch, Twitter, or Facebook Live; there could be several of these at once.


There are two challenges (both depend on a user action, such as different button clicks):


- At the time the livestream starts, we know only one livestream acceptor; the other acceptors could be added via another API at any time, or might not be added at all during the whole live stream.
- Any of the streams could be stopped at any moment, including the first one, which started the original livestreaming service.

To solve these challenges I am trying the following process (I took an mp4 file as the source, for simplicity):


- Create a stream and store it in PIPE.stdout:

import subprocess as sp

ffmpeg_Command_get_stream = 'ffmpeg -re -i test.mp4 -f flv pipe:1'
ffmpeg_Command_get_stream = ffmpeg_Command_get_stream.split()
pipe = sp.Popen(ffmpeg_Command_get_stream,
                stdout=sp.PIPE,
                stderr=sp.PIPE,
                bufsize=8000000,
                shell=True,
                universal_newlines=True)
# communicate() waits for the process to exit and buffers all of its output
out, err = pipe.communicate()



- And send that stream, with the help of FFmpeg, to the livestream acceptor when the YouTube Livestream button is clicked:
ffmpeg_Command_send_stream = ['ffmpeg','-i',pipe.stdout,'-f','flv',RTMPURL_YOUTUBE]






Update - trying to explain it a little more:


Step 1 - I need a real-time stream from the first command, so I used -re in FFmpeg.


Step 2 - use the above stream as the input of another command and send its output as a livestream to YouTube (or Twitch/Facebook). But this second step should happen only when the user clicks the "YT LiveStream" button. The tricky part is that there are multiple buttons (YT LiveStream, Twitch LiveStream, Facebook LiveStream); the user can click any of them at any time, and can also click all of them one by one. A sketch of one possible wiring is shown below.
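For reference, here is a minimal sketch of one way this could be wired up, not a tested solution: a single FFmpeg process writes MPEG-TS to stdout (MPEG-TS, unlike FLV, can be joined mid-stream, which matters for buttons clicked after the stream has started), a fan-out thread copies each chunk to every active sender, and one sender FFmpeg process per service is started or stopped from the button handlers. The names start_sender, stop_sender, fan_out and the CHUNK size are all illustrative.

import subprocess as sp
import threading

CHUNK = 65536

# One source process: read the input in real time (-re) and write MPEG-TS to stdout.
source = sp.Popen(
    ['ffmpeg', '-re', '-i', 'test.mp4', '-c', 'copy', '-f', 'mpegts', 'pipe:1'],
    stdout=sp.PIPE)

senders = []                     # ffmpeg processes currently pushing to a service
senders_lock = threading.Lock()

def start_sender(rtmp_url):
    # Button handler: start one ffmpeg that reads TS from stdin and pushes FLV to rtmp_url.
    proc = sp.Popen(
        ['ffmpeg', '-f', 'mpegts', '-i', 'pipe:0', '-c', 'copy', '-f', 'flv', rtmp_url],
        stdin=sp.PIPE)
    with senders_lock:
        senders.append(proc)
    return proc

def stop_sender(proc):
    # "Stop" handler: closing stdin lets that ffmpeg finish without touching the others.
    with senders_lock:
        senders.remove(proc)
    proc.stdin.close()

def fan_out():
    # Copy every chunk of the source stream to all currently active senders.
    while True:
        chunk = source.stdout.read(CHUNK)
        if not chunk:
            break
        with senders_lock:
            for proc in list(senders):
                try:
                    proc.stdin.write(chunk)
                except BrokenPipeError:
                    senders.remove(proc)

threading.Thread(target=fan_out, daemon=True).start()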




Sorry for the bad explanation.


What am I doing wrong? Is this possible, or do I need to go with another process?


Any help would be greatly appreciated.


-
Using speech diarization results in a speech recognition API
10 September 2021, by FIRE
I'm trying to understand more about speech diarization and speech recognition. I started following this tutorial and I was able to get tuples of the audio labelling.


According to the tutorial you can use the Google Speech API: you send the audio segments to Google's API and they get transcribed. That is exactly where I'm stuck!


According to the tutorial, all you have to do is:


- Get a Google / IBM Watson speech-to-text API (done)

(I have done this step and got a Watson API key and URL!)


1. For each tuple element 'ele' in your labelling file, extract ele[0] as the speaker label, ele[1] as the start time and ele[2] as the end time.


(I didn't understand this step at all... I tried this, but I'm not quite sure if this is what they mean:)



for ele in labelling:
    speaker_label = ele[0]
    start_time = ele[1]
    end_time = ele[2]




2. Trim your original audio file from start time to end time. You can use ffmpeg for this task.


(This step depends on step 1, but I also don't understand it, as I have no idea how to use ffmpeg or how to utilise it for this project.)
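A trim along those lines might look like the following sketch (the output filename and the choice of ffmpeg's -ss/-to options are my assumptions, not the tutorial's exact code):

import subprocess

def trim_segment(src, start_time, end_time, out_path):
    # -ss/-to select the time window; -c copy keeps the audio as-is (no re-encode).
    subprocess.run(
        ['ffmpeg', '-y', '-i', src,
         '-ss', str(start_time), '-to', str(end_time),
         '-c', 'copy', out_path],
        check=True)

# e.g. for one tuple of the labelling: the segment from 3.2s to 7.9s
trim_segment('Audio files/testForTheOthers.wav', 3.2, 7.9, 'segment_0.wav')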


3. Pass the trimmed audio file obtained in the previous step to Google's API / IBM Watson's API, which will return the text transcript of this audio segment.


(I just need to understand the context, or the code for how to pass the segmented audio, and what it will look like.)
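With the ibm_watson client already set up in the full code below, passing one trimmed segment could look roughly like this sketch ('segment_0.wav' is the file produced by the previous step's sketch; the model is left at its default):

# Sketch: send one trimmed segment to Watson and collect its transcript.
with open('segment_0.wav', 'rb') as audio_file:
    response = speech_to_text.recognize(
        audio=audio_file,
        content_type='audio/wav').get_result()

transcript = ' '.join(
    result['alternatives'][0]['transcript']
    for result in response['results'])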


4. Write the transcript along with the speaker label to a text file and save it.
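That last step is plain Python; something like this sketch, reusing speaker_label from step 1 and transcript from the step 3 sketch:

# Append one line per segment to the output file.
with open('transcript.txt', 'a', encoding='utf-8') as f:
    f.write(f'{speaker_label}: {transcript}\n')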


Any help would be more than appreciated !


My full code:


from resemblyzer import preprocess_wav, VoiceEncoder
from pathlib import Path

from resemblyzer.audio import sampling_rate

from spectralcluster import SpectralClusterer

import ffmpeg

from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# IBM-related components (not used, as this part is not implemented yet)
authenticator = IAMAuthenticator('Key here')
speech_to_text = SpeechToTextV1(
 authenticator=authenticator
)


speech_to_text.set_service_url(
 'URL HERE')

#-------------------------------------------------------

#From the tutorial: this part gets the audio file and processes it

# give the file path to your audio file
audio_file_path = 'Audio files/testForTheOthers.wav'
wav_fpath = Path(audio_file_path)

wav = preprocess_wav(wav_fpath)
encoder = VoiceEncoder("cpu")
_, cont_embeds, wav_splits = encoder.embed_utterance(wav, return_partials=True, rate=16)
print(cont_embeds.shape)



#-----------------------------------------------------------------------


#From the tutorial: this is the clustering part
#(some parts of the code gave me errors; that is why they are not included)
# (p_percentile=0.90, gaussian_blur_sigma=1) got removed (errors)

clusterer = SpectralClusterer(
 min_clusters=2,
 max_clusters=100,
)

labels = clusterer.predict(cont_embeds)
#-----------------------------------------------------------------------



#From the tutorial: this is the labelling part


def create_labelling(labels, wav_splits):
    from resemblyzer.audio import sampling_rate
    times = [((s.start + s.stop) / 2) / sampling_rate for s in wav_splits]
    labelling = []
    start_time = 0

    for i, time in enumerate(times):
        if i > 0 and labels[i] != labels[i - 1]:
            temp = [str(labels[i - 1]), start_time, time]
            labelling.append(tuple(temp))
            start_time = time
        if i == len(times) - 1:
            temp = [str(labels[i]), start_time, time]
            labelling.append(tuple(temp))

    return labelling


labelling = create_labelling(labels, wav_splits)


print(labelling)
#----------------------

#Me Trying to implement step 1

for ele in labelling:
    speaker_label = ele[0]
    start_time = ele[1]
    end_time = ele[2]


#-----------------------------------------------------------------------------

#After this part you are supposed to implement the rest of the tutorial 
#but I'm stuck





-
Problem with code generated from TypeScript (node-fluent-ffmpeg module)
10 December 2022, by Steve Rock
This is my TypeScript code:



import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { FfmpegCommand } from 'fluent-ffmpeg';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  let test;

  try {
    test = new FfmpegCommand('./adventure.mkv');
  } catch (error) {
    console.log(error);
  }

  await app.listen(3000);
}

bootstrap();




Generated JavaScript code:



"use strict";
var __awaiter = (this && this.__awaiter) || function (thisArg, _arguments, P, generator) {
 function adopt(value) { return value instanceof P ? value : new P(function (resolve) { resolve(value); }); }
 return new (P || (P = Promise))(function (resolve, reject) {
 function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }
 function rejected(value) { try { step(generator["throw"](value)); } catch (e) { reject(e); } }
 function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }
 step((generator = generator.apply(thisArg, _arguments || [])).next());
 });
};
Object.defineProperty(exports, "__esModule", { value: true });
const core_1 = require("@nestjs/core");
const app_module_1 = require("./app.module");
const fluent_ffmpeg_1 = require("fluent-ffmpeg");
function bootstrap() {
 return __awaiter(this, void 0, void 0, function* () {
 const app = yield core_1.NestFactory.create(app_module_1.AppModule);
 let test;
 try {
 test = new fluent_ffmpeg_1.FfmpegCommand('./adventure.mkv');
 }
 catch (error) {
 console.log(error);
 }
 yield app.listen(3000);
 });
}
bootstrap();
//# sourceMappingURL=main.js.map




When I run this application I get the following error:



main.ts:12
message: "fluent_ffmpeg_1.FfmpegCommand is not a constructor"
stack: "TypeError: fluent_ffmpeg_1.FfmpegCommand is not a constructor\n at c:\nest\dist\src\main.js:20:20\n at Generator.next ()\n at fulfilled (c:\nest\dist\src\main.js:5:58)\n at process._tickCallback (internal/process/next_tick.js:68:7)\n at Function.Module.runMain (internal/modules/cjs/loader.js:834:11)\n at startup (internal/bootstrap/node.js:283:19)\n at bootstrapNodeJSCore (internal/bootstrap/node.js:622:3)"



That's because of the line test = new fluent_ffmpeg_1.FfmpegCommand('./adventure.mkv'). When I change it to just test = new fluent_ffmpeg_1('./adventure.mkv') I don't get the error. Do you know how to fix it? If you know where there are ffmpeg examples in TypeScript, please share with me :)
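For what it's worth, the behaviour described above matches fluent-ffmpeg exporting the command constructor itself at runtime, with FfmpegCommand existing only in the type declarations. A sketch of an import that fits that layout, assuming "esModuleInterop": true in tsconfig.json:

// Minimal sketch, assuming "esModuleInterop": true in tsconfig.json.
// fluent-ffmpeg's runtime export is the command constructor itself,
// so import the default export and call it directly.
import ffmpeg from 'fluent-ffmpeg';

const command = ffmpeg('./adventure.mkv'); // equivalent to new FfmpegCommand('./adventure.mkv')
command
  .on('error', (err: Error) => console.log(err))
  .format('mp4')
  .save('./adventure.mp4');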