Ando K., Ueyoshi K., Oba Y., Hirose K., Uematsu R., Kudo T., Ikebe M., Asai T., Takamaeda-Yamazaki S., and Motomura M., "Dither NN: hardware/algorithm co-design for accurate quantized neural networks," IEICE Transactions on Information and Systems, vol. E102-D, pp. 2341-2353 (2019).
Ueyoshi K., Ando K., Hirose K., Takamaeda-Yamazaki S., Hamada M., Kuroda T., and Motomura M., "QUEST: Multi-purpose log-quantized DNN inference engine stacked on 96-MB 3-D SRAM using inductive coupling technology in 40-nm CMOS," IEEE Journal of Solid-State Circuits, vol. 54, no. 1, pp. 186-196 (2019).
Hirose K., Uematsu R., Ando K., Ueyoshi K., Ikebe M., Asai T., Motomura M., and Takamaeda-Yamazaki S., "Quantization error-based regularization for hardware-aware neural network training," Nonlinear Theory and Its Applications, vol. E9-N, no. 4, pp. 453-465 (2018).
Ando K., Ueyoshi K., Orimo K., Yonekawa H., Sato S., Nakahara H., Takamaeda-Yamazaki S., Ikebe M., Asai T., Kuroda T., and Motomura M., "BRein memory: a single-chip binary/ternary reconfigurable in-memory deep neural network accelerator achieving 1.4 TOPS at 0.6 W," IEEE Journal of Solid-State Circuits, vol. 53, no. 4, pp. 983-994 (2018).
Marukame T., Ueyoshi K., Asai T., Motomura M., Schmid A., Suzuki M., Higashi Y., and Mitani Y., "Error tolerance analysis of deep learning hardware using restricted Boltzmann machine towards low-power memory implementation," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 64, no. 4, pp. 462-466 (2017).
Ueyoshi K., Marukame T., Asai T., Motomura M., and Schmid A., "FPGA implementation of a scalable and highly parallel architecture for restricted Boltzmann machines," Circuits and Systems, vol. 7, no. 9, pp. 2132-2141 (2016).
Ueyoshi K., Marukame T., Asai T., Motomura M., and Schmid A., "Robustness of hardware-oriented restricted Boltzmann machines in deep belief networks for reliable processing," Nonlinear Theory and Its Applications, vol. E7-N, no. 3, pp. 395-406 (2016).
Ueyoshi K., "A scalable parallel architecture for restricted Boltzmann machines and its memory-error tolerance," LSI Fundamental Technology Laboratory Seminar, TOSHIBA Corporate Research & Development Center, Kawasaki, Japan (Mar. 22, 2016).
International Conferences
Shiba K., Omori T., Ueyoshi K., Ando K., Hirose K., Takamaeda-Yamazaki S., Motomura M., Hamada M., and Kuroda T., "A 3D-Stacked SRAM using Inductive Coupling with Low-Voltage Transmitter and 12:1 SerDes," 2020 IEEE International Symposium on Circuits and Systems (ISCAS), held online (originally Seville, Spain) (Oct. 10-21, 2020).
Ando K., Ueyoshi K., Oba Y., Hirose K., Uematsu R., Kudo T., Ikebe M., Asai T., Takamaeda-Yamazaki S., and Motomura M., "Dither NN: an accurate neural network with dithering for low bit-precision hardware," The 2018 International Conference on Field-Programmable Technology (FPT'18), Tenbusu-Naha Hall, Naha, Japan (Dec. 10-14, 2018).
Kudo T., Ueyoshi K., Ando K., Hirose K., Uematsu R., Oba Y., Ikebe M., Asai T., Motomura M., and Takamaeda-Yamazaki S., "Area and energy optimization for bit-serial log-quantized DNN accelerator with shared accumulators," IEEE 12th International Symposium on Embedded Multicore/Many-core Systems-on-Chip, Vietnam National University, Hanoi, Vietnam (Sep. 12-14, 2018).
Ueyoshi K., Ando K., Hirose K., Takamaeda-Yamazaki S., Kadomoto J., Miyata T., Hamada M., Kuroda T., and Motomura M., "QUEST: A 7.49-TOPS Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96MB 3D SRAM using Inductive-Coupling Technology in 40nm CMOS," 2018 IEEE International Solid-State Circuits Conference (ISSCC 2018), San Francisco Marriott Marquis, San Francisco, USA (Feb. 11-15, 2018).
Hida I., Ueyoshi K., Takamaeda-Yamazaki S., Ikebe M., Motomura M., and Asai T., "Sign-invariant unsupervised learning facilitates weighted-sum computation in analog neural-network devices," 2017 International Symposium on Nonlinear Theory and Its Applications, Cancun International Convention Center, Cancun, Mexico (Dec. 4-7, 2017).
Hirose K., Uematsu R., Ando K., Orimo K., Ueyoshi K., Ikebe M., Asai T., Takamaeda-Yamazaki S., and Motomura M., "Logarithmic Compression for Memory Footprint Reduction in Neural Network Training," 5th International Workshop on Computer Systems and Architectures (CSA 2017), Aomori Prefecture Tourist Center, Aomori, Japan (Nov. 19-22, 2017).
Ando K., Ueyoshi K., Hirose K., Orimo K., Yonekawa H., Sato S., Nakahara H., Ikebe M., Takamaeda-Yamazaki S., Asai T., Kuroda T., and Motomura M., "In-Memory Area-Efficient Signal Streaming Processor Design for Binary Neural Networks," 60th IEEE International Midwest Symposium on Circuits and Systems (MWSCAS 2017), Tufts University, Boston, USA (Aug. 6-9, 2017).
Ando K., Ueyoshi K., Orimo K., Yonekawa H., Sato S., Nakahara H., Ikebe M., Asai T., Takamaeda-Yamazaki S., Kuroda T., and Motomura M., "BRein memory: a 13-layer 4.2 K neuron/0.8 M synapse binary/ternary reconfigurable in-memory deep neural network accelerator in 65 nm CMOS," 2017 Symposia on VLSI Technology and Circuits, Rihga Royal Hotel, Kyoto, Japan (Jun. 5-8, 2017).
Ueyoshi K., Marukame T., Asai T., Motomura M., and Schmid A., "Feature extraction system using restricted Boltzmann machines on FPGA," 2017 IEEE International Symposium on Circuits and Systems, session A4P-O, Baltimore Marriott Waterfront, Baltimore, USA (May 28-31, 2017).
Ueyoshi K., Ando K., Orimo K., Ikebe M., Asai T., and Motomura M., "Exploring optimized accelerator design for binarized convolutional neural networks," The 2017 International Joint Conference on Neural Networks, William A. Egan Civic and Convention Center, Anchorage, Alaska, USA (May 14-19, 2017).
Ando K., Ueyoshi K., Orimo K., Ikebe M., Takamaeda-Yamazaki S., Asai T., and Motomura M., "Throughput analysis of a data-flow reconfigurable array architecture for convolutional neural networks," The 5th RIEC International Symposium on Brain Functions and Brain Computer, Tohoku University, Sendai, Japan (Feb. 27-28, 2017).
Orimo K., Ando K., Ueyoshi K., Ikebe M., Asai T., and Motomura M., "FPGA architecture for feed-forward sequential memory network targeting long-term time-series forecasting," 2016 International Conference on Reconfigurable Computing and FPGAs, Iberostar Cancun hotel, Cancun, Mexico (Nov. 30-Dec. 2, 2016).
Ueyoshi K., Marukame T., Asai T., Motomura M., and Schmid A., "Memory-error tolerance of scalable and highly parallel architecture for restricted Boltzmann machines in deep belief network," IEEE International Symposium on Circuits and Systems, Montreal Sheraton Center, Montreal, Canada (May 22-25, 2016).
Awards
Ando K., Ueyoshi K., Oba Y., Hirose K., Uematsu R., Kudo T., Ikebe M., Asai T., Takamaeda-Yamazaki S., and Motomura M., "Dither NN: An Accurate Neural Network with Dithering for Low Bit-Precision Hardware," FPT'18 - Best Paper Award, Dec. 13, 2018.
Ueyoshi K., "QUEST: A 7.49-TOPS Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96MB 3D SRAM using Inductive-Coupling Technology in 40nm CMOS," IEEE SSCS - Predoctoral Achievement Award, Dec. 1, 2018.
Ueyoshi K., Ando K., Hirose K., Takamaeda-Yamazaki S., Kadomoto J., Miyata T., Hamada M., Kuroda T., and Motomura M., "QUEST: A 7.49-TOPS Multi-Purpose Log-Quantized DNN Inference Engine Stacked on 96MB 3D SRAM using Inductive-Coupling Technology in 40nm CMOS," ISSCC 2018 Silkroad Award, Feb. 11, 2018.