This repository has been archived by the owner on Sep 12, 2024. It is now read-only.
```js
import { LLM } from "llama-node";
import { LLamaCpp } from "llama-node/dist/llm/llama-cpp.js";
import path from "path";
import fs from "fs";

process.on("unhandledRejection", (error) => {
  console.error("Unhandled promise rejection:", error);
});

const model = path.resolve(process.cwd(), "../llama.cpp/models/13B/ggml-model-q4_0.bin");
if (!fs.existsSync(model)) {
  console.error("Model file does not exist: ", model);
}

const llama = new LLM(LLamaCpp);
//console.log("model:", model)

const config = {
  modelPath: model,
  enableLogging: true,
  nCtx: 1024,
  seed: 0,
  f16Kv: false,
  logitsAll: false,
  vocabOnly: false,
  useMlock: false,
  embedding: true,
  useMmap: true,
  nGpuLayers: 0,
};
//console.log("config:", config)

const prompt = "Who is the president of the United States?";
const params = {
  nThreads: 4,
  nTokPredict: 2048,
  topK: 40,
  topP: 0.1,
  temp: 0.2,
  repeatPenalty: 1.1,
  prompt,
};
//console.log("params:", params)

try {
  console.log("Loading model...");
  await llama.load(config);
  console.log("Model loaded");
} catch (error) {
  console.error("Error loading model: ", error);
}

const response = await llama.createCompletion(params);
console.log(response);

const run = async () => {
  try {
    await llama.load(config);
    console.log("load complete");
    await llama.getEmbedding(params).then(console.log);
  } catch (error) {
    console.error("Error loading model or generating embeddings: ", error);
  }
};
run();
```
I added a lot of debugging and found that execution stops at `await llama.load(config);` (line 44 in my script): the sequence simply halts there and the process terminates. No errors were caught.
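Since the process exits without throwing, one way to narrow this down (a generic Node.js sketch, nothing llama-node specific) is to register process-level handlers before calling `load`, so a silent exit at least reports how it ended:

```javascript
// Sketch: register process lifecycle handlers *before* loading the model,
// so a silent exit at least logs what happened. These are standard
// Node.js process events, not llama-node APIs.
process.on("exit", (code) => {
  // Fires on any normal exit path; only synchronous work is allowed here.
  console.error(`process exiting with code ${code}`);
});
process.on("uncaughtException", (err) => {
  console.error("uncaught exception:", err);
  process.exit(1);
});
process.on("unhandledRejection", (reason) => {
  console.error("unhandled rejection:", reason);
});
```

Note that a hard crash inside the native addon (e.g. a segfault in the bundled llama.cpp) bypasses all of these handlers; in that case running the script under a debugger, e.g. `lldb -- node index.mjs` (`index.mjs` being whatever your entry point is called), will stop at the faulting signal.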
MacBook Pro with M1 Max
macOS 13.4 (22F66)
Node.js v20.3.0
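Even when a process dies without printing anything, the shell records how it ended. Checking the exit status right after a run can distinguish a clean (if premature) exit from a native crash; `index.mjs` below is a placeholder for the actual entry-point name:

```shell
# After the script dies silently, inspect how it ended:
#   node index.mjs          # placeholder for your entry point
#   echo "exit status: $?"
# A status above 128 means the process was killed by a signal
# (128 + signal number), which points at a crash in native code
# rather than a JavaScript error.
# Demonstration of the convention with a plain shell process:
sh -c 'kill -SEGV $$'
echo "exit status: $?"   # 128 + 11 = 139 on typical POSIX shells
```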