
Media (91)
-
GetID3 - Additional buttons
9 April 2013
Updated: April 2013
Language: French
Type: Image
-
Core Media Video
4 April 2013
Updated: June 2013
Language: French
Type: Video
-
The Pirate Bay from Belgium
1 April 2013
Updated: April 2013
Language: French
Type: Image
-
Ogg detection bug
22 March 2013
Updated: April 2013
Language: French
Type: Video
-
Example of action buttons for a collaborative collection
27 February 2013
Updated: March 2013
Language: French
Type: Image
-
Example of action buttons for a personal collection
27 February 2013
Updated: February 2013
Language: English
Type: Image
Other articles (48)
-
MediaSPIP: Changing the rights for creating objects and for definitive publication
11 November 2010
By default, MediaSPIP allows the creation of 5 types of objects.
Also by default, the rights to create these objects and to publish them definitively are reserved for administrators, although they can of course be configured by the webmasters.
These rights are locked down for several reasons: because allowing publication should be the webmaster's decision rather than a platform-wide default, and therefore should not be the default choice; because having an account can also serve other purposes, (...) -
HTML5 audio and video support
13 April 2011
MediaSPIP uses HTML5 video and audio tags to play multimedia files, taking advantage of the latest W3C innovations supported by modern browsers.
The MediaSPIP player used has been created specifically for MediaSPIP and can be easily adapted to fit in with a specific theme.
For older browsers the Flowplayer flash fallback is used.
MediaSPIP allows for media playback on major mobile platforms with the above (...) -
HTML5 audio and video support
10 April 2011
MediaSPIP uses the HTML5 video and audio tags to play multimedia documents, taking advantage of the latest W3C innovations supported by modern browsers.
For older browsers, the Flowplayer Flash player is used as a fallback.
The HTML5 player used was created specifically for MediaSPIP: its appearance can be fully customized to match a chosen theme.
These technologies make it possible to deliver video and sound both to conventional computers (...)
On other sites (8991)
-
How to fix av_interleaved_write_frame() broken pipe error in PHP
31 March, by Adekunle Adeyeye
I have an issue using ffmpeg to stream audio and pass it to Google Cloud Speech-to-Text in PHP.


It returns the output below.
I have tried delaying parts of the script, but that did not solve it.
I have also checked similar questions; however, they are mostly in Python and none of those solutions actually work here.


built with gcc 8 (GCC)
 cpudetect
 libavutil 56. 31.100 / 56. 31.100
 libavcodec 58. 54.100 / 58. 54.100
 libavformat 58. 29.100 / 58. 29.100
 libavdevice 58. 8.100 / 58. 8.100
 libavfilter 7. 57.100 / 7. 57.100
 libavresample 4. 0. 0 / 4. 0. 0
 libswscale 5. 5.100 / 5. 5.100
 libswresample 3. 5.100 / 3. 5.100
 libpostproc 55. 5.100 / 55. 5.100
Input #0, mp3, from 'https://npr-ice.streamguys1.com/live.mp3':
 Metadata:
 icy-br : 96
 icy-description : NPR Program Stream
 icy-genre : News and Talk
 icy-name : NPR Program Stream
 icy-pub : 0
 StreamTitle :
 Duration: N/A, start: 0.000000, bitrate: 96 kb/s
 Stream #0:0: Audio: mp3, 32000 Hz, stereo, fltp, 96 kb/s
Stream mapping:
 Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, s16le, to 'pipe:':
 Metadata:
 icy-br : 96
 icy-description : NPR Program Stream
 icy-genre : News and Talk
 icy-name : NPR Program Stream
 icy-pub : 0
 StreamTitle :
 encoder : Lavf58.29.100
 Stream #0:0: Audio: pcm_s16le, 16000 Hz, mono, s16, 256 kb/s
 Metadata:
 encoder : Lavc58.54.100 pcm_s16le
**av_interleaved_write_frame(): Broken pipe** 256.0kbits/s speed=1.02x
**Error writing trailer of pipe:: Broken pipe**
size= 54kB time=00:00:01.76 bitrate= 250.8kbits/s speed=0.465x
video:0kB audio:55kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!



This is my PHP code:


require_once 'vendor/autoload.php';
 
 // Class imports (assumption: the Google Cloud Speech-to-Text V2 client library namespaces)
 use Google\Cloud\Speech\V2\SpeechClient;
 use Google\Cloud\Speech\V2\RecognitionConfig;
 use Google\Cloud\Speech\V2\AutoDetectDecodingConfig;
 use Google\Cloud\Speech\V2\StreamingRecognitionConfig;
 use Google\Cloud\Speech\V2\StreamingRecognizeRequest;
 
 $projectId = "xxx-45512";
 putenv('GOOGLE_APPLICATION_CREDENTIALS=' . __DIR__ . '/xxx-45512-be3eb805f1d7.json');
 
 // Database connection
 $pdo = new PDO('mysql:host=localhost;dbname=', '', '');
 $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
 
 $url = "https://npr-ice.streamguys1.com/live.mp3";
 
 $ffmpegCmd = "ffmpeg -re -i $url -acodec pcm_s16le -ac 1 -ar 16000 -f s16le -";
 
 $fp = popen($ffmpegCmd, "r");
 if (!$fp) {
 die("Failed to open FFmpeg stream.");
 }
 sleep(5);

 try {
 $client = new SpeechClient(['transport' => 'grpc', 'credentials' => json_decode(file_get_contents(getenv('GOOGLE_APPLICATION_CREDENTIALS')), true)]);
 } catch (Exception $e) {
 echo 'Error: ' . $e->getMessage(); 
 exit;
 }
 
 $recognitionConfig = new RecognitionConfig([
 'auto_decoding_config' => new AutoDetectDecodingConfig(),
 'language_codes' => ['en-US'],
 'model' => 'long',
 ]);
 
 $streamingConfig = new StreamingRecognitionConfig([
 'config' => $recognitionConfig,
 ]);
 
 $configRequest = new StreamingRecognizeRequest([
 'recognizer' => "projects/$projectId/locations/global/recognizers/_",
 'streaming_config' => $streamingConfig,
 ]);
 
 
 function streamAudio($fp)
 {
 while (!feof($fp)) {
 yield fread($fp, 4096);
 }
 }
 
 $responses = $client->streamingRecognize([
 'requests' => (function () use ($configRequest, $fp) {
 yield $configRequest; // Send initial config
 foreach (streamAudio($fp) as $audioChunk) {
 yield new StreamingRecognizeRequest(['audio' => $audioChunk]);
 }
 })()]
 );
 
 // $responses = $speechClient->streamingRecognize();
 // $responses->writeAll([$request,]);
 
 foreach ($responses as $response) {
 foreach ($response->getResults() as $result) {
 $transcript = $result->getAlternatives()[0]->getTranscript();
 // echo "Transcript: $transcript\n";
 
 // Insert into the database
 $stmt = $pdo->prepare("INSERT INTO transcriptions (transcript) VALUES (:transcript)");
 $stmt->execute(['transcript' => $transcript]);
 }
 }
 
 
 pclose($fp);
 $client->close();



I'm not sure what the issue is at this time.
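To narrow this down, here is a minimal check that can be run on its own (a sketch, reusing the same stream URL and ffmpeg flags as above): it only reads the pipe, without the Speech client, so if ffmpeg still reports a broken pipe the problem is on the reading side rather than in the gRPC calls.

 // Minimal pipe-read check (sketch, separate from the transcription script).
 // "av_interleaved_write_frame(): Broken pipe" generally means the process reading
 // ffmpeg's stdout stopped reading or closed the pipe while ffmpeg was still writing.
 $url = "https://npr-ice.streamguys1.com/live.mp3";
 $cmd = "ffmpeg -re -i $url -acodec pcm_s16le -ac 1 -ar 16000 -f s16le -";
 
 $fp = popen($cmd, "r");
 if (!$fp) {
     die("Failed to open FFmpeg stream.");
 }
 
 $total = 0;
 $start = time();
 while (!feof($fp) && (time() - $start) < 10) { // read for roughly 10 seconds
     $chunk = fread($fp, 4096);
     if ($chunk === false || $chunk === '') {
         break;
     }
     $total += strlen($chunk);
 }
 pclose($fp);
 
 echo "Read $total bytes of raw PCM\n";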


UPDATE


I've done some more debugging; the error has cleared and the stream actually starts.
However, I expect the audio to be transcribed and my database to be updated, but instead I get this error when I close the stream.




This is my updated code:


$handle = popen($ffmpegCommand, "r");

 try {
 $client = new SpeechClient(['transport' => 'grpc', 'credentials' => json_decode(file_get_contents(getenv('GOOGLE_APPLICATION_CREDENTIALS')), true)]);
 } catch (Exception $e) {
 echo 'Error: ' . $e->getMessage(); 
 exit;
 }
 
 try {
 $recognitionConfig = (new RecognitionConfig())
 ->setAutoDecodingConfig(new AutoDetectDecodingConfig())
 ->setLanguageCodes(['en-US'], ['en-UK'])
 ->setModel('long');
 } catch (Exception $e) {
 echo 'Error: ' . $e->getMessage(); 
 exit;
 }
 
 try {
 $streamConfig = (new StreamingRecognitionConfig())
 ->setConfig($recognitionConfig);
 } catch (Exception $e) {
 echo 'Error: ' . $e->getMessage();
 exit;
 }
 try {
 $configRequest = (new StreamingRecognizeRequest())
 ->setRecognizer("projects/$projectId/locations/global/recognizers/_")
 ->setStreamingConfig($streamConfig);
 } catch (Exception $e) {
 echo 'Error: ' . $e->getMessage(); 
 exit;
 }
 
 $stream = $client->streamingRecognize();
 $stream->write($configRequest);
 
 mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('bef')");
 
 while (!feof($handle)) {
 $chunk = fread($handle, 25600);
 // printf('chunk: ' . $chunk);
 if ($chunk !== false) {
 try {
 $request = (new StreamingRecognizeRequest())
 ->setAudio($chunk);
 $stream->write($request);
 } catch (Exception $e) {
 printf('Errorc: ' . $e->getMessage());
 }
 }
 }
 
 
 $insr = json_encode($stream);
 mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('$insr')");
 
 foreach ($stream->read() as $response) {
 mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('loop1')");
 foreach ($response->getResults() as $result) {
 mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('loop2')");
 foreach ($result->getAlternatives() as $alternative) {
 $trans = $alternative->getTranscript();
 mysqli_query($conn, "INSERT INTO transcriptions (transcript) VALUES ('$trans')");
 }
 }
 }
 
 pclose($handle);
 $stream->close();
 $client->close();
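For comparison, this is the write/read pattern I am trying to follow (a sketch only, reusing $client, $configRequest and $handle from the code above, and assuming the object returned by streamingRecognize() is the generated BidiStream exposing write(), closeWrite() and readAll()):

 // Sketch of the bidi-streaming pattern (assumption: the google/gax BidiStream API).
 $stream = $client->streamingRecognize();
 $stream->write($configRequest); // the config request must be written first
 
 // For a live stream this loop never reaches EOF, so in practice it would need
 // to be bounded (for example by elapsed time or bytes read) before the
 // responses are consumed below.
 while (!feof($handle)) {
     $chunk = fread($handle, 25600);
     if ($chunk !== false && $chunk !== '') {
         $stream->write((new StreamingRecognizeRequest())->setAudio($chunk));
     }
 }
 
 $stream->closeWrite(); // tell the server that no more audio is coming
 
 foreach ($stream->readAll() as $response) { // readAll() yields responses as they arrive
     foreach ($response->getResults() as $result) {
         foreach ($result->getAlternatives() as $alternative) {
             echo $alternative->getTranscript(), "\n";
         }
     }
 }
 
 pclose($handle);
 $client->close();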



-
avformat/iamf: fix setting channel layout for Scalable layers
17 June, by James Almer
avformat/iamf: fix setting channel layout for Scalable layers
The way streams are coded in an IAMF struct follows a scalable model where the channel layouts for each layer may not match the channel order our API can represent in a Native order layout.
For example, an audio element may have six coded streams in the form of two stereo streams, followed by two mono streams, and then by another two stereo streams, for a total of 10 channels, and define for them four scalable layers with loudspeaker_layout values "Stereo", "5.1ch", "5.1.2ch", and "5.1.4ch". The first layer references the first stream, and each following layer references all previous streams plus extra ones.
In this case, the "5.1ch" layer will reference four streams (the first two stereo and the two mono) to encompass six channels, which does not match our native layout 5.1(side), given that FC and LFE come after FL+FR but before SL+SR, and here they are at the end.
For this reason, we need to build Custom order layouts that properly represent what we're exporting.
Before:
Stream group #0:0[0x12c] : IAMF Audio Element :
Layer 0 : stereo
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Layer 1 : 5.1(side)
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Layer 2 : 5.1.2
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:4[0x4] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Layer 3 : 5.1.4
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:4[0x4] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:5[0x5] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
After:
Stream group #0:0[0x12c] : IAMF Audio Element :
Layer 0 : stereo
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Layer 1 : 6 channels (FL+FR+SL+SR+FC+LFE)
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Layer 2 : 8 channels (FL+FR+SL+SR+FC+LFE+TFL+TFR)
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:4[0x4] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Layer 3 : 10 channels (FL+FR+SL+SR+FC+LFE+TFL+TFR+TBL+TBR)
Stream #0:0[0x0] : Audio : opus, 48000 Hz, stereo, fltp (default)
Stream #0:1[0x1] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:2[0x2] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:3[0x3] : Audio : opus, 48000 Hz, mono, fltp (dependent)
Stream #0:4[0x4] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Stream #0:5[0x5] : Audio : opus, 48000 Hz, stereo, fltp (dependent)
Signed-off-by: James Almer <jamrial@gmail.com>
- [DH] libavformat/iamf_parse.c
- [DH] libavformat/iamf_writer.c
- [DH] libavformat/iamfdec.c
- [DH] tests/ref/fate/iamf-5_1-copy
- [DH] tests/ref/fate/iamf-5_1-demux
- [DH] tests/ref/fate/iamf-5_1_4
- [DH] tests/ref/fate/iamf-7_1_4
- [DH] tests/ref/fate/iamf-9_1_6
- [DH] tests/ref/fate/mov-mp4-iamf-5_1_4
- [DH] tests/ref/fate/mov-mp4-iamf-7_1_4-video-first
- [DH] tests/ref/fate/mov-mp4-iamf-7_1_4-video-last
-
Announcing our latest open source project: DeviceDetector
This blog post is an announcement of our latest open source project release: DeviceDetector! The Universal Device Detection library will parse any User Agent and detect the browser, operating system, device used (desktop, tablet, mobile, TV, car, console, etc.), brand and model.
Read on to learn more about this exciting release.
Why did we create DeviceDetector?
Our previous library, UserAgentParser, could only detect operating systems and browsers. But as more and more traffic comes from mobile devices such as smartphones and tablets, it is increasingly important to know which devices a website's visitors are using.
To ensure that device detection within Piwik gets the attention required to be as accurate as possible, we decided to move that part of Piwik into a separate project that we will maintain on its own. As a standalone project, we hope DeviceDetector will gain better visibility as well as better support by and for the community!
DeviceDetector is hosted on GitHub at piwik/device-detector. It is also available as a Composer package through Packagist.
How DeviceDetector works
Every client requesting data from a web server identifies itself by sending a so-called User-Agent with the request. These User Agents may contain several pieces of information, such as:
- client name and version (clients can be browsers or other software like feed readers, media players, apps,…)
- operating system name and version
- device identifier, which can be used to detect the brand and model.
For example:
Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36
This User Agent contains the following information: the operating system is Android 4.4.2, the client is the browser Chrome Mobile 32.0.1700.99, and the device is a Google Nexus 5 smartphone.
What DeviceDetector currently detects
DeviceDetector is able to detect bots (such as search engines, feed fetchers and site monitors); five different client types, including around 100 browsers, 15 feed readers, some media players, personal information managers (such as mail clients) and mobile apps using the AFNetworking framework; around 80 operating systems; and nine different device types (smartphones, tablets, feature phones, consoles, TVs, car browsers, cameras, smart displays and desktop devices) from over 180 brands.
Note: Piwik itself currently does not use the full feature set of DeviceDetector. Client detection is not yet implemented in Piwik (only detected browsers are reported; other clients are marked as Unknown). Client detection will be added to Piwik in the future; follow #5413 to stay updated.
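For reference, here is a minimal usage sketch based on the DeviceDetector README (installable via Composer as the piwik/device-detector package):

require_once 'vendor/autoload.php';

use DeviceDetector\DeviceDetector;

// Parse a User Agent and read back the detected client, OS and device.
$userAgent = 'Mozilla/5.0 (Linux; Android 4.4.2; Nexus 5 Build/KOT49H) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/32.0.1700.99 Mobile Safari/537.36';

$dd = new DeviceDetector($userAgent);
$dd->parse();

if ($dd->isBot()) {
    $botInfo = $dd->getBot();           // name, category and producer of the bot
} else {
    $clientInfo = $dd->getClient();     // e.g. browser name and version
    $osInfo     = $dd->getOs();         // e.g. Android 4.4.2
    $device     = $dd->getDeviceName(); // e.g. smartphone
    $brand      = $dd->getBrandName();  // e.g. Google
    $model      = $dd->getModel();      // e.g. Nexus 5
}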
Performance of DeviceDetector
Detection is currently handled by an enormous number of regexes defined in several .YML files. As parsing these .YML files is a bit slow, DeviceDetector can cache the parsed files. By default, DeviceDetector uses a static cache, which means everything is cached in static variables. Since that only speeds up repeated detections within one process, there are also adapters for caching in files or memcache to speed up detection across requests.
How can users contribute to DeviceDetector?
Submit your devices that are not detected yet
If you own a device that is currently not detected correctly by DeviceDetector, please create an issue on GitHub.
To check whether your device is detected correctly by DeviceDetector, go to your Piwik server, click the ‘Settings’ link, then click ‘Device Detection’ under the Diagnostic menu. If the data does not match, please copy the displayed User Agent and use it, together with your device data, to create a ticket.
Submit a list of your User Agents
To create new detections or improve existing ones, we need lists of User Agents. If your website is visited mostly from non-desktop devices, it would be useful if you sent us a list of the User Agents that visited it. To do so, you need access to your access logs. The following command will extract the User Agents:
zcat ~/path/to/access/logs* | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -n20000 > /home/piwik/top-user-agents.txt
If you want to help us with this data, please get in touch at devicedetector@piwik.org.
Submit improvements on GitHub
As DeviceDetector is a free/libre library, we invite you to help us improve the detections as well as the code. Please feel free to create tickets and pull requests on GitHub.
What’s the next big thing for DeviceDetector?
Please check out the list of issues in device-detector issue tracker.
We hope the community will answer our call for help. Together, we can make DeviceDetector the most powerful device detection library!
Happy Device Detection,