Releases: janhq/cortex.cpp
0.5.0-36
Fix brew build-from-source error
0.5.0-34
Changes
- fix: fix checksum undefined @marknguyen1302 (#991)
- feat: support unload engine @vansangpfiev (#989)
- cortex js allow tag x.y.z-t @hiento09 (#990)
- add build-deps for cortex-cpp binding @hiento09 (#988)
- fix: env does not work after bundled in an electron app @louis-jan (#987)
- feat: download model checksum @marknguyen1302 (#986)
- Windows uninstaller should stop cortex and remove cortex home folder for all users @hiento09 (#984)
- chore: name cortex processes @louis-jan (#983)
- fix: correct swagger port @marknguyen1302 (#980)
- fix: missing utils reference @louis-jan (#978)
- fix: force change localhost @marknguyen1302 (#977)
- chore: bump cpp version to 0.5.0-27 @marknguyen1302 (#976)
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
0.5.0-33
feat: add unload engine
0.5.0-31
Changes
- fix: env does not work after bundled in an electron app @louis-jan (#987)
- feat: download model checksum @marknguyen1302 (#986)
- Windows uninstaller should stop cortex and remove cortex home folder for all users @hiento09 (#984)
- chore: name cortex processes @louis-jan (#983)
- fix: correct swagger port @marknguyen1302 (#980)
- fix: missing utils reference @louis-jan (#978)
- fix: force change localhost @marknguyen1302 (#977)
- chore: bump cpp version to 0.5.0-27 @marknguyen1302 (#976)
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-30
Changes
- chore: name cortex processes @louis-jan (#983)
- fix: correct swagger port @marknguyen1302 (#980)
- fix: missing utils reference @louis-jan (#978)
- fix: force change localhost @marknguyen1302 (#977)
- chore: bump cpp version to 0.5.0-27 @marknguyen1302 (#976)
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-29
Changes
- fix: missing utils reference @louis-jan (#978)
- fix: force change localhost @marknguyen1302 (#977)
- chore: bump cpp version to 0.5.0-27 @marknguyen1302 (#976)
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-28
Changes
- fix: force change localhost @marknguyen1302 (#977)
- chore: bump cpp version to 0.5.0-27 @marknguyen1302 (#976)
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-27
Changes
- chore: return error on pulling an existing model @louis-jan (#975)
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-26
Changes
- feat: add config cpp port @marknguyen1302 (#973)
- chore: clean logs periodically @louis-jan (#972)
- Fix codesign macos for application and installer @hiento09 (#974)
- fix: persists engine version on init @louis-jan (#969)
- feat: support multi process in cortex @marknguyen1302 (#965)
- Update the Base URL for the API reference to reflect the new URL @irfanpena (#971)
- fix: disable devtool @marknguyen1302 (#967)
- Fix codesign windows @hiento09 (#964)
- fix: create message request with string content instead of array @louis-jan (#963)
- fix: cortex engines init api with empty body should init default options @louis-jan (#962)
- fix: change default host cortex @marknguyen1302 (#961)
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan and @marknguyen1302
0.5.0-8
Changes
- feat: watch models and engines update for proper data retrieval @louis-jan (#960)
- fix: fix issue when re-download aborted model @marknguyen1302 (#959)
- Add launchpad uninstaller prerm and postrm @hiento09 (#957)
- feat: support martian nvidia engine @marknguyen1302 (#956)
- chore: delete local file when abort download @marknguyen1302 (#954)
- fix: terminate system should kill cortex process @louis-jan (#955)
- fix: start and run models are not outputting last error logs @louis-jan (#951)
- fix: remove hardcode stream @marknguyen1302 (#952)
- chore: persist context length and ngl from gguf file @louis-jan (#947)
- chore: show error from remote engine, lint @marknguyen1302 (#949)
- Fix openai api pipeline @hiento09 (#948)
- chore: destroy dangling processes on uninstall @louis-jan (#945)
- feat: support openrouter, cohere engine @marknguyen1302 (#946)
- feature: support local model pull @louis-jan (#944)
- chore: handle failed download @marknguyen1302 (#943)
- chore: specify engine version to pull @louis-jan (#942)
- bump cortex llamacpp for llama 3.1 @Van-QA (#941)
Contributors
@Van-QA, @hiento09, @hientominh, @louis-jan and @marknguyen1302